Admittedly, the root problem in all of the specific design and implementation mistakes we've mentioned is not the increased transparency caused by Ajax. In MyLocalWeatherForecast.com, the real problem was the lack of proper authorization on the server. The programmers assumed that because the only pages calling the administrative functions already required authorization, no further authorization was necessary. If they had implemented additional authorization checking in the server code, the attacks would not have been successful. While the transparency of the client code did not cause the vulnerability, it did contribute to the vulnerability by advertising the existence of the functionality. Similarly, it does an attacker little good to learn the data types of the server API method parameters if those parameters are properly validated on the server. However, the increased transparency of the application provides an attacker with more information about how your application operates and makes it more likely that any mistakes or vulnerabilities in the validation code will be found and exploited.
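To make the principle concrete: every privileged server-side function should re-check authorization itself rather than trusting that only authorized pages call it. A minimal sketch in JavaScript (the session shape, role name, and handler are hypothetical illustrations, not the site's actual code):

```javascript
// Hypothetical session object: { user: { name, role } } or null.
// The administrative handler re-checks authorization on every call,
// instead of trusting that only admin pages ever invoke it.
function handleSetForecast(session, city, forecast, db) {
  if (!session || !session.user || session.user.role !== 'admin') {
    // Reject: the caller is not an authenticated administrator.
    return { status: 403, body: 'Forbidden' };
  }
  db[city] = forecast; // perform the privileged operation
  return { status: 200, body: 'Forecast updated' };
}
```

With this check in place, discovering the administrative API through the transparent client code gains the attacker nothing.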
It may sound as if we're advocating an approach of security through obscurity, but in fact this is the complete opposite of the truth. It is generally a poor idea to assume that if your application is difficult to understand or reverse-engineer, then it will be safe from attack. The biggest problem with this approach is that it relies on the attacker's lack of persistence in carrying out an attack. There is no roadblock that obscurity can throw up against an attacker that cannot be overcome with enough time and patience. Some roadblocks are bigger than others; for example, 2048-bit asymmetric key encryption is going to present quite a challenge to a would-be hacker. Still, with enough time and patience (and cleverness) the problems this encryption method presents are not insurmountable. The attacker may decide that the payout is worth the effort, or he may just see the defense as a challenge and attack the problem that much harder.
That being said, while it's a bad idea to rely on security through obscurity, a little extra obscurity never hurts. Obscuring application logic raises the bar for an attacker, possibly stopping those without the skills or the patience to de-obfuscate the code. It is best to look at obscurity as one component of a complete defense and not a defense in and of itself. Banks don't advertise the routes and schedules that their armored cars take, but this secrecy is not the only thing keeping the burglars out: The banks also have steel vaults and armed guards to protect the money. Take this approach to securing your Ajax applications. Some advertisement of the application logic is necessary due to the requirements of Ajax, but always attempt to minimize it, and keep some (virtual) vaults and guards around in case someone figures it out.
Do obfuscate important application logic code. Often this simple step is enough to deter the script kiddie or casual hacker who doesn't have the patience or the skills necessary to recreate the original. However, always remember that everything that is sent to the client, even obfuscated code, is readable.
To keep your login credentials secure over the network, use an encryption or hashing algorithm to protect the password. For more information on such algorithms, see: -us/library/wet69s13.aspx, Salting Your Password: Best Practices.
You might think it's easy to determine the target origin, but it frequently isn't. The obvious approach is to simply take the target origin (i.e., its hostname and port number) from the URL in the request. However, the application server frequently sits behind one or more proxies, so the original URL differs from the URL the application server actually receives. If your application server is accessed directly by its users, then using the origin from the URL is fine.
Configure your application to simply know its target origin: It's your application, so you can determine its target origin and set that value in a server configuration entry. This is the most secure approach because the value is defined server side and is therefore trusted. However, it can be difficult to maintain if your application is deployed in many places, e.g., dev, test, QA, production, and possibly multiple production instances. Setting the correct value for each of these environments can be difficult, but if you can do it via a central configuration store and have each instance retrieve its value from there, that's great! (Note: Make sure the centralized configuration store is maintained securely, because a major part of your CSRF defense depends on it.)
Use the Host header value: If you prefer that the application determine its own target origin so it doesn't have to be configured for each deployed instance, we recommend using the Host family of headers. The Host header's purpose is to contain the target origin of the request. However, if your application server sits behind a proxy, the Host header value is most likely changed by the proxy to the target origin of the URL behind the proxy, which differs from the original URL. This modified Host header origin won't match the source origin in the original Origin or Referer headers.
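Whichever way the target origin is obtained, the check itself reduces to comparing it against the source origin taken from the request's Origin (or Referer) header. A minimal sketch, assuming the target origin comes from trusted server configuration (the `isSameOrigin` helper is illustrative):

```javascript
// Compare the source origin (from the Origin or Referer header)
// against the configured target origin. Both values are reduced
// to scheme://host:port before comparison.
function isSameOrigin(sourceHeader, targetOrigin) {
  if (!sourceHeader) return false; // no Origin/Referer present: reject
  try {
    const src = new URL(sourceHeader);
    const tgt = new URL(targetOrigin);
    return src.origin === tgt.origin;
  } catch (e) {
    return false; // malformed header value: reject
  }
}
```

Rejecting requests whose source origin is missing or malformed fails closed, which is the safer default for a CSRF check.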
You can configure jQuery to automatically add the token to all request headers by adopting the following code snippet. This provides simple and convenient CSRF protection for your AJAX-based applications:
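A sketch of such a snippet, assuming the server has embedded the token in the page (the `X-CSRF-Token` header name and the `installCsrfHeader` helper are common conventions, not jQuery requirements):

```javascript
// Register a jQuery prefilter via $.ajaxSetup so that every request
// sent through jQuery carries the anti-CSRF token as a custom header.
function installCsrfHeader($, token) {
  $.ajaxSetup({
    beforeSend: function (xhr) {
      xhr.setRequestHeader('X-CSRF-Token', token);
    }
  });
}
```

In a page this would typically be wired up once, e.g. `installCsrfHeader(jQuery, document.querySelector('meta[name="csrf-token"]').content);`, after which the server rejects any request whose header token doesn't match the session token.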
The last bullet is a key feature of GWT that makes building AJAX functionality quite simple. But if the RPC interface you build is not designed correctly, you could be exposing the server side of your application to attack. The rest of this article covers some design principles to consider if you want to be sure that the communication between the client and server is secure.
It is important to note that this article is not meant to be a guide on how to build a GWT RPC interface. The point of this article is to explain how to design your interface in a secure way once you are already familiar with the implementation details. Please refer to the GWT RPC documentation for details on how to build, use, and deploy an interface if you are not already familiar with it.
The reason for having an RPC interface is so that you can implement your web application in a way that utilizes code running both in the client browser and on the server. Code running on the server would typically be used to read and write a database or to invoke functionality that cannot be done from a sandboxed web browser, e.g., sending an email or connecting to servers other than the server that served up the client code (most browsers block such cross-domain requests under the same-origin policy).
But note that the server side of your RPC service can still be called by anyone who has access to your web application, even if they never touch the client code that was meant to invoke that interface.
A very important point (and the crux of this article) is that all of the functions hosted on your server can be called by anyone who has access to your web application, regardless of what you put in your client code. As illustrated in the section above, you cannot rely on your client code to guard what can be done through your RPC interface.
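The consequence is that every RPC method must re-validate its caller and its inputs on the server. GWT service implementations are written in Java, but the principle is language-independent; here is a sketch in JavaScript with hypothetical names:

```javascript
// Server-side RPC handler that re-enforces everything the client UI
// was supposed to enforce: authentication, input validity, and
// per-record authorization. Never trust checks done in client code.
function deleteRecord(session, recordId, store) {
  if (!session || !session.authenticated) {
    throw new Error('not authenticated');
  }
  if (!Number.isInteger(recordId) || recordId < 0) {
    throw new Error('invalid record id'); // re-validate all inputs
  }
  if (!session.ownedRecords.includes(recordId)) {
    throw new Error('not authorized for this record');
  }
  delete store[recordId];
  return true;
}
```

Even if the client UI only ever offers the user their own records, the server check is what actually prevents a crafted RPC call from deleting someone else's.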
John Fox is currently Director of Engineering at Savant Protection, a provider of application whitelisting security solutions. He has over 20 years of professional experience designing and developing enterprise software solutions, and 14 of those years specifically spent on security solutions. He has held senior engineering positions with firms such as Symantec and Lucent Technologies Bell Laboratories, as well as numerous startup ventures. John has been devoted to understanding and providing solutions for the needs of CIOs and IT managers for managing corporate data as securely as possible while allowing their company to operate as efficiently as possible. He is particularly familiar with IT-GRC, compliance, risk management, intrusion detection and prevention, as well as the internals of Windows, UNIX, and Android. John holds a B.S. in Computer and Systems Engineering from Rensselaer Polytechnic Institute and lives in the Boston area. He blogs at -whitelisting-blog/ and is a researcher with InfoSec Institute.
The anti-CSRF token described above is set in the user session upon login and then verified with every form submission. In most cases, this protection is enough. However, some sites prefer to use a more secure approach. To achieve a good compromise between security and usability, you can generate separate tokens for each form.
If your web page or web application is very busy and server storage is limited, you probably want to avoid persisting tokens on the server side. In these cases, you can generate and process tokens cryptographically. With this approach, there is no need to store the token in the server session.
Anti-CSRF tokens are one of the safest ways to defend against CSRF attacks, but they can be bypassed in some circumstances. For example, if the web application has a cross-site scripting vulnerability (XSS), the attacker may exploit it to execute a script that silently fetches a new version of the form with the current CSRF token. To prevent this and maintain solid web application security, make sure you check your web application for all types of vulnerabilities, not just CSRF.