
There is a request to develop a Box app internally.


Assuming that we do not give the app developers administrator or co-administrator accounts, for a custom app that uses OAuth 2.0 (server authentication), we anticipate that the risk is limited to access to the folders/files within the scope of the account used to log in to the app. However, are there any other anticipated risks, particularly around security?

Hi @TOYOHIDE_SUMITA,



We have listed some best practices that we strongly recommend developers implement; although these are quite common, developers do sometimes cut corners.



Here is a summary:



Whatever authentication you choose for your app, the client secret is confidential and should be protected. The client secret can always be re-generated in the configuration tab of your app in your developer console.



However you cache the tokens, either in a cache file or database, be sure to protect them. I personally like to encrypt them.
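For example, here is a minimal sketch, assuming Python and the `cryptography` package; the cache file name and the way the key is obtained are illustrative only:

```python
import json
from cryptography.fernet import Fernet

# In practice, load the key from a secret store or environment variable
# rather than generating it next to the cache file.
key = Fernet.generate_key()
fernet = Fernet(key)

tokens = {"access_token": "...", "refresh_token": "..."}

# Encrypt the tokens before writing the cache file.
with open("token_cache.bin", "wb") as f:
    f.write(fernet.encrypt(json.dumps(tokens).encode()))

# Later, decrypt the cache file to get the tokens back.
with open("token_cache.bin", "rb") as f:
    tokens = json.loads(fernet.decrypt(f.read()))
```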



You can always downscope a token to grant more restrictive access. This is particularly important if you are using the UI Elements, such as the Box Explorer in the context of a web app, since JavaScript is not safe and can be inspected in the browser.
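As a rough sketch of what that looks like with the Box Python SDK (boxsdk); the credentials, folder ID, and scope names are placeholders:

```python
from boxsdk import OAuth2, Client

# Server-side client holding the full-scope token.
auth = OAuth2(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    access_token="FULL_SCOPE_ACCESS_TOKEN",
)
client = Client(auth)

# Exchange the full token for one restricted to a single folder and a few scopes.
folder = client.folder(folder_id="1234567890")
downscoped = client.downscope_token(
    scopes=["base_explorer", "item_preview", "item_download"],
    item=folder,
)

# downscoped.access_token is what you hand to the browser-side UI Element;
# it only works on that folder and only for the listed operations.
print(downscoped.access_token)
```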



You can always revoke a token, for example in an OAuth2 context when a user logs out.
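A minimal sketch with the Box Python SDK, assuming the same OAuth2 object your app used to build its client (credentials are placeholders):

```python
from boxsdk import OAuth2

oauth = OAuth2(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    access_token="CURRENT_ACCESS_TOKEN",
    refresh_token="CURRENT_REFRESH_TOKEN",
)

# Call this when the user logs out; Box invalidates the access/refresh token
# pair, so a leaked copy can no longer be used.
oauth.revoke()
```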



If you want to read more about all the combinations of application types and authentication types, here are some pointers to our documentation:





Of course, feel free to reach out with any questions.



Best regards


Hello @rbarbosa



Thank you for your reply.



As an example: suppose an application I have developed keeps a token cache file on the local machine. If I log in to that application, the cache file is generated, and that file is then passed to a third party, the file could serve as the authorization for an application the third party developed, within the scope of my account's permissions.



Does that mean that encrypting the cache file is more secure because it cannot be reused if it is leaked?



Best regards


Hi @TOYOHIDE_SUMITA ,



Let me take a step back just for context and clarification.



To use the API your app needs an access token, and access tokens have a 60-minute life span.


After that life span your app needs to retrieve a new access token.





  • Developer access tokens cannot be refreshed via the API; you have to refresh them manually in the developer console


  • OAuth2 includes a refresh token with a 60-day life span. Your app uses the refresh token to get a new access token and, in the process, also gets a new refresh token. If the refresh token has expired, the user must re-authorize the app (see the sketch after this list).


  • CCG (Client Credentials Grant) - There is no refresh token. Your app requests a new access token using the client ID and client secret.


  • JWT (JSON Web Token) - There is no refresh token. Your app requests a new access token using a JWT assertion.
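Here is a rough sketch of the OAuth2 case with the Box Python SDK; the token cache file and its format are my own illustrative choices:

```python
import json
from boxsdk import OAuth2, Client

def store_tokens(access_token, refresh_token):
    # The SDK calls this every time it refreshes, because each refresh also
    # issues a new refresh token; persist both (ideally encrypted, see above).
    with open("token_cache.json", "w") as f:
        json.dump({"access_token": access_token, "refresh_token": refresh_token}, f)

with open("token_cache.json") as f:
    cached = json.load(f)

oauth = OAuth2(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    access_token=cached["access_token"],
    refresh_token=cached["refresh_token"],
    store_tokens=store_tokens,
)
client = Client(oauth)

# Any API call made after the 60-minute access token expires triggers an
# automatic refresh using the 60-day refresh token.
print(client.user().get().name)
```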




An access token used server side is considered secure. However, if you pass that access token to the client side, for example when using a JavaScript UI Element like the Explorer, your app should downscope the token, because anyone can inspect the JavaScript in the browser. This is a common issue with any app using JavaScript.
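For example, a hypothetical endpoint (Flask here, purely as an illustration) that the browser-side Explorer could call to obtain only a downscoped token, never the full one:

```python
from flask import Flask, jsonify
from boxsdk import OAuth2, Client

app = Flask(__name__)

auth = OAuth2(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    access_token="FULL_SCOPE_ACCESS_TOKEN",  # placeholder, kept server side only
)
client = Client(auth)

@app.route("/box-token")
def box_token():
    # Return a token restricted to one folder and to read-only scopes; this is
    # what the JavaScript UI Element initializes with.
    folder = client.folder(folder_id="1234567890")
    downscoped = client.downscope_token(
        scopes=["base_explorer", "item_preview"],
        item=folder,
    )
    return jsonify({"access_token": downscoped.access_token})
```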



Since the OAuth2 refresh token is specific to each user, I typically store it in a database along with the user's information. Treat this refresh token the same way you would treat any password.


The difference here is that applications typically don't even store passwords; they store one-way hashes.


If the database is compromised, it is still very hard to recover the password.


The refresh token is slightly different: the application needs to read it back, so a hash will not work. This is why I encrypt the refresh token; that way, even if the database is compromised, the refresh tokens are safe.
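To make the distinction concrete, a small sketch using only the Python standard library plus the `cryptography` package; the values are placeholders:

```python
import hashlib
import os
from cryptography.fernet import Fernet

# Passwords: store only a salted one-way hash; the app never needs the
# original value back, it only needs to verify it.
salt = os.urandom(16)
password_hash = hashlib.pbkdf2_hmac("sha256", b"user-password", salt, 600_000)

# Refresh tokens: the app must read the original value back to call Box,
# so use reversible encryption with a key kept outside the database.
key = Fernet.generate_key()  # in practice, load this from a secret store
fernet = Fernet(key)
encrypted = fernet.encrypt(b"the-refresh-token")   # store this in the database
refresh_token = fernet.decrypt(encrypted)          # readable again when needed
```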



For CCG and JWT, it is usually considered safe to store the credentials locally on the server side.


It is common for developers to use .env files to configure environment variables on the server and then read them in the application; this is supported in many programming languages.
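A minimal sketch, assuming the `python-dotenv` package and a recent Box Python SDK that provides CCGAuth; the variable names in the .env file are my own choice:

```python
import os
from dotenv import load_dotenv
from boxsdk import CCGAuth, Client

# Reads BOX_CLIENT_ID, BOX_CLIENT_SECRET and BOX_ENTERPRISE_ID from a local
# .env file into the process environment (keep the .env file out of git).
load_dotenv()

auth = CCGAuth(
    client_id=os.environ["BOX_CLIENT_ID"],
    client_secret=os.environ["BOX_CLIENT_SECRET"],
    enterprise_id=os.environ["BOX_ENTERPRISE_ID"],
)
client = Client(auth)
print(client.user().get().name)  # the app's service account
```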



Typically in a production environment there is no need to encrypt these files.


However, when building a CI/CD pipeline, let's say on GitHub, handling secrets, encryption, and configuration files is quite cumbersome, and we must prevent any secrets from ending up in the repository.



For example, suppose you want to run some tests in your CI/CD process and you want to use JWT authentication.


There are 8 attributes in the configuration file, so you have 3 options:





  • Create 8 secrets in GitHub and use them in your tests


  • Create 1 secret in GitHub containing the base64-encoded configuration file, decode the secret back into a file, and use it in your code.


  • Create 1 secret with your encryption key and use it to decrypt a file




Sometimes the last option is just the more practical one (see the sketch below).
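As an illustration of that last option, a minimal sketch assuming the JWT config file was encrypted with Fernet and the key is exposed to the CI job as a single secret; the JWT_CONFIG_KEY variable and file names are made up for the example (JWT support needs the `boxsdk[jwt]` extra):

```python
import os
from cryptography.fernet import Fernet
from boxsdk import JWTAuth, Client

# The only secret in GitHub: the Fernet key, exposed as an environment variable.
fernet = Fernet(os.environ["JWT_CONFIG_KEY"].encode())

# The encrypted config file lives in the repository; decrypt it only inside the job.
with open("jwt_config.json.enc", "rb") as f:
    config_bytes = fernet.decrypt(f.read())

with open("jwt_config.json", "wb") as f:
    f.write(config_bytes)

client = Client(JWTAuth.from_settings_file("jwt_config.json"))
print(client.user().get().name)  # run the tests with this client
```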



Cheers


Hello



Thank you for sharing the details.


I will use this information as a reference.



Best regards

