manage environment variables on the hosting platform
Hello JATOS community,
I’m currently working on a study hosted on a JATOS server and need to securely manage sensitive data such as API keys for an interaction study with Large Language Models. Ideally, I’d like to use environment variables for this purpose.
Does JATOS provide a way to set and manage environment variables on the hosting platform? If so, how can this be implemented? Are there specific configurations or best practices to follow to ensure these variables remain secure and accessible to the application?
We have already set up a JATOS server on our University of Freiburg server (https://weblab.psychologie.uni-freiburg.de/jatos).
Any advice, examples, or documentation references would be greatly appreciated!
Thanks in advance for your help!
Best,
Julius
Comments
Hi @Fenn_CAM ,
Can you clarify how you would like to interact with the LLM? My guess is that you would like the experiment itself (i.e. the code that runs in the participants' browser) to interact with the LLM. Is that correct?
— Sebastiaan
Check out SigmundAI.eu for our OpenSesame AI assistant!
Hi,
That is correct - the code runs on the client side (the participants' browser), and the API calls are sent from their client (see the code snippet below*). So presumably the API_KEY needs to be openly accessible, right?
Best regards
Julius
*
try {
    const response = await fetch('https://api.openai.com/v1/chat/completions', {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
            'Authorization': `Bearer ${CONFIG.OPENAI_API_KEY}`
        },
        body: JSON.stringify({
            "model": "gpt-3.5-turbo",
            "messages": [
                ...chatMemory,
                {"role": "user", "content": userInput}
            ]
        })
    });
    if (!response.ok) {
        throw new Error('An error occurred in the request to the API');
    }
    ..
}
It's always problematic security-wise to have API keys anywhere in client-side code, because any code that runs in the browser can potentially be seen by the user. AFAIK the safe way to do this is to have another HTTP endpoint whose only purpose is to hand out the API key; that way you wouldn't have to keep it in the code. But you'd need some kind of authentication for this endpoint, otherwise everyone could just request the API key. I don't think this approach is feasible for you.
There is no way for JATOS to hand over the system's environment variables to the client side.
But, I think, you can use JATOS' study, component, or batch input to store the API key and hand it over to the client side. Those 'input' data are stored in the JATOS database and handed over during initialization of a study run. You can then access them in your JS code with jatos.studyJsonInput, jatos.componentJsonInput, or jatos.batchJsonInput. This way your API keys aren't hard-coded.
I hope this helps,
Kristian
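For illustration, reading the key from the batch JSON input as Kristian describes might look like the sketch below. The key name OPENAI_API_KEY and the fallback behavior are assumptions, not anything JATOS prescribes; the key itself would be entered in the batch's "JSON input" field in the JATOS GUI, e.g. {"OPENAI_API_KEY": "sk-..."}.

```javascript
// Hypothetical helper: pull the OpenAI key out of a JATOS JSON input object
// (jatos.batchJsonInput, jatos.studyJsonInput, or jatos.componentJsonInput).
// Throws if the key is missing, so a misconfigured batch fails loudly.
function getApiKey(jsonInput) {
    if (!jsonInput || typeof jsonInput.OPENAI_API_KEY !== 'string') {
        throw new Error('OPENAI_API_KEY missing from JATOS JSON input');
    }
    return jsonInput.OPENAI_API_KEY;
}

// In the experiment, after JATOS has initialized (inside jatos.onLoad):
//   const apiKey = getApiKey(jatos.batchJsonInput);
//   // ...then use `Bearer ${apiKey}` in the Authorization header
//   // instead of the hard-coded CONFIG.OPENAI_API_KEY.
```

Note that the key still reaches the participant's browser this way; it is only kept out of the study's source files.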
Hey Kristian,
thank you so much for your response! I also thought about setting up an API endpoint (a Next.js API hosted on https://vercel.com), but I was unsure how to secure authentication for this endpoint (JSON Web Token, ...)?
My team wants the study to be set up in a few days, so I will stick to your second, simpler approach.
Thanks again!
Best,
Julius
Hi @Fenn_CAM ,
If you are going to expose the token through the experiment code (even if it's not hard-coded, as @kri suggests), then there is a very real possibility of it being intercepted and used by someone else. So absolutely do set a spending limit on that token, and revoke it as soon as the study is done.
In general, and again as @kri says, you would implement your own endpoint on a custom server and access the OpenAI API through that route. But that would indeed take more than a few days to implement.
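For what it's worth, the core of such a proxy can be sketched as below. Everything here is an assumption for illustration (the /api/chat route, the SHARED_SECRET scheme, the helper name buildUpstreamRequest); the point is only that the real OpenAI key stays in a server-side environment variable and never reaches the browser.

```javascript
// Hypothetical proxy sketch: the browser sends only the chat messages plus a
// low-value shared secret; the server attaches the real OpenAI key.

// Pure helper: check the client's shared secret and build the upstream payload.
function buildUpstreamRequest(sharedSecret, authHeader, messages) {
    if (authHeader !== `Bearer ${sharedSecret}`) {
        return { status: 401, body: { error: 'unauthorized' } };
    }
    return {
        status: 200,
        body: { model: 'gpt-3.5-turbo', messages: messages }
    };
}

// A server route (e.g. with Express, assumed setup) would then do roughly:
// app.post('/api/chat', async (req, res) => {
//     const check = buildUpstreamRequest(process.env.SHARED_SECRET,
//                                        req.get('Authorization'),
//                                        req.body.messages);
//     if (check.status !== 200) return res.status(check.status).json(check.body);
//     const r = await fetch('https://api.openai.com/v1/chat/completions', {
//         method: 'POST',
//         headers: {
//             'Content-Type': 'application/json',
//             'Authorization': `Bearer ${process.env.OPENAI_API_KEY}`  // never sent to clients
//         },
//         body: JSON.stringify(check.body)
//     });
//     res.status(r.status).json(await r.json());
// });
```

The shared secret only stops casual abuse (it is still visible client-side), but a leaked secret can be rotated without touching the OpenAI key, and the server can additionally rate-limit requests.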
— Sebastiaan
Check out SigmundAI.eu for our OpenSesame AI assistant!