INTRODUCTION

1.1 INFORMATION CONSISTENCY
Several trends are opening up the era of Cloud Computing, which is an Internet-based development and use of computer technology. Powerful processors that were once too expensive for most users have become affordable because processing power is pooled and provided on demand.
The development of high-speed Internet with increased bandwidth has improved the quality of service, leading to better customer satisfaction, which is the primary goal of any organization. The migration of data from the user's computer to remote data centers has provided customers with great and reliable convenience. Amazon Simple Storage Service (S3) is a well-known example and one of the pioneers of cloud services. Such services eliminate the need to maintain data on a local system, which is a huge boost to quality of service. However, customers are then at the mercy of the cloud service provider, whose downtime leaves users unable to access their own data.
Since every coin has two sides, cloud computing likewise has its own share of security threats, and some threats may be yet to be discovered. From the user's point of view, data security is the most important aspect, and it ultimately determines customer satisfaction. Users have limited control over their own data, so conventional cryptographic measures cannot be adopted directly. The data stored in the cloud should therefore be verified periodically to ensure it has not been modified without informing the owner. Data that is rarely used is sometimes moved to lower-tier storage, making it more vulnerable to attack. Note also that Cloud Computing does not merely store data; it also lets the user modify the data, append information to it, or delete it permanently. To assure the integrity of the data, hashing algorithms can be used to create checksums that alert the user to modifications.
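As a minimal illustration of this idea (a sketch, not the report's exact scheme; the choice of SHA-256 and the file name are assumptions), a checksum can be computed before upload and recomputed after download:

    // Sketch: detect modification of a file by comparing SHA-256 checksums.
    using System;
    using System.IO;
    using System.Security.Cryptography;

    class ChecksumDemo
    {
        // Returns a hex-encoded SHA-256 digest of the file's contents.
        static string ComputeChecksum(string path)
        {
            using (var sha = SHA256.Create())
            using (var stream = File.OpenRead(path))
            {
                byte[] digest = sha.ComputeHash(stream);
                return BitConverter.ToString(digest).Replace("-", "");
            }
        }

        static void Main()
        {
            // "report.docx" is a placeholder file name.
            string before = ComputeChecksum("report.docx");
            // ... file is stored in the cloud, later downloaded again ...
            string after = ComputeChecksum("report.docx");
            Console.WriteLine(before == after
                ? "Checksums match: no modification detected."
                : "Checksum mismatch: the file was modified.");
        }
    }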
1.2 PROBLEM DEFINITION
First, traditional cryptographic primitives for data security protection cannot be adopted directly, because the user loses control of the data under Cloud Computing. Where cloud services are concerned, the user is at a disadvantage regarding the security of the file. The file is stored on a server that is a pooled resource: anyone with the user's credentials can access the file, and if an attacker learns the password as well as the encryption keys, the attacker can modify the file contents, letting the information stored in the file be accessed by an unauthorized user.
A further problem is what happens if someone copies your work and claims it as their own. Anything we design or invent is governed by the principle of whether or not it guarantees customer satisfaction. Hence, the underlying problem is whether the customer can rest assured that the data is safe from unauthorized access.

1.3 PROJECT PURPOSE
In our proposed system, we assure the user that the information is safe by implementing a system that offers three levels of security. Concerning the data security part, the system is divided into three modules: an IP-triggering module, a client-authentication module and a redirecting module.
The system generates a user password and a key that is used for client authentication. The algorithm generates two eight-character keys consisting of combinations of letters, special characters and numbers, used for client authorization and file authorization. Why use keys of only eight characters? The purpose of our system is to prevent illegal data access when the user's credentials are compromised; by testing against weak keys, which are easier to crack, we design the system to be more robust.
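A sketch of such a key generator in C# follows (the exact character set and the use of .NET's cryptographic random number generator are assumptions; the report does not specify them):

    // Sketch: generate eight-character keys from letters, digits and
    // special characters using a cryptographically secure RNG.
    using System;
    using System.Security.Cryptography;
    using System.Text;

    class KeyGenerator
    {
        // Assumed alphabet; the report only says letters, specials and numbers.
        const string Alphabet =
            "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789!@#$%&*";

        static string GenerateKey(int length = 8)
        {
            var key = new StringBuilder(length);
            byte[] buffer = new byte[4];
            using (var rng = RandomNumberGenerator.Create())
            {
                for (int i = 0; i < length; i++)
                {
                    rng.GetBytes(buffer);
                    uint value = BitConverter.ToUInt32(buffer, 0);
                    // Modulo selection is good enough for a sketch.
                    key.Append(Alphabet[(int)(value % (uint)Alphabet.Length)]);
                }
            }
            return key.ToString();
        }

        static void Main()
        {
            Console.WriteLine("Client key: " + GenerateKey());
            Console.WriteLine("File key:   " + GenerateKey());
        }
    }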
1.4 PROJECT FEATURES
Our scheme aims to prevent illegal access to users' data. A user, after registering on the system, has the advantage of several layers of security.
The most basic function of our system is to inform the user, through mail-triggering events, that the data has been accessed from an unregistered IP address. At login, an attacker who tries to access the file using credentials stolen from the victim is presented with a dialog box asking for a key. Whatever key the attacker enters is never accepted. The attacker is allowed three tries. After the three tries, the attacker is given access to a fake file, which is implemented by the redirection module.
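The flow described above might look roughly like the following sketch (the method names, key value and file paths are illustrative placeholders, not the project's actual code):

    // Sketch: three key attempts; after three failures the caller is
    // silently handed a decoy file by the redirection module.
    using System;

    class KeyGate
    {
        static bool VerifyKey(string entered, string expected)
        {
            return entered == expected;
        }

        static string ResolveFile(string expectedKey)
        {
            for (int attempt = 1; attempt <= 3; attempt++)
            {
                Console.Write("Enter access key (attempt {0} of 3): ", attempt);
                string entered = Console.ReadLine();
                if (VerifyKey(entered, expectedKey))
                    return "real.txt";           // genuine file (placeholder path)
            }
            return "decoy.txt";                  // fake file (placeholder path)
        }

        static void Main()
        {
            string path = ResolveFile("Ab3$xY7!"); // placeholder key
            Console.WriteLine("Serving: " + path);
        }
    }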
1.5 MODULES DESCRIPTION

1.5.1 CLOUD STORAGE
Outsourcing data to cloud storage servers is a rising trend among many firms and users, owing to its economic advantages. It essentially means that the owner (client) of the data moves the data to a third-party cloud storage server, which is supposed to faithfully store the data, presumably for a fee, and provide it back to the owner whenever required. Cloud storage increases maintainability and decreases the cost associated with storage.
1.5.2 SIMPLY ARCHIVES
This problem concerns obtaining and verifying a proof that the data stored by a user at a remote data storage site in the cloud (called a cloud storage archive, or simply an archive) has not been modified by the archive, so that the integrity of the data is assured. The file is encrypted using a symmetric-key algorithm (the same key is used for encryption and decryption) before it is stored in cloud storage. The cloud archive must not cheat the owner, where cheating means that the storage archive might delete or modify some of the data. While developing proofs of data possession at untrusted cloud storage servers, we are often limited by the resources at the cloud server as well as at the client.
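For illustration, the symmetric-key encryption step might look like the following sketch using AES (the report does not name a specific cipher, so AES and the key handling here are assumptions):

    // Sketch: encrypt a file's bytes with AES before storing them in the
    // cloud; the same key and IV decrypt them, which is what "symmetric" means.
    using System;
    using System.Security.Cryptography;
    using System.Text;

    class SymmetricDemo
    {
        static byte[] Encrypt(byte[] plaintext, byte[] key, byte[] iv)
        {
            using (var aes = Aes.Create())
            using (var encryptor = aes.CreateEncryptor(key, iv))
                return encryptor.TransformFinalBlock(plaintext, 0, plaintext.Length);
        }

        static byte[] Decrypt(byte[] ciphertext, byte[] key, byte[] iv)
        {
            using (var aes = Aes.Create())
            using (var decryptor = aes.CreateDecryptor(key, iv))
                return decryptor.TransformFinalBlock(ciphertext, 0, ciphertext.Length);
        }

        static void Main()
        {
            byte[] key, iv;
            using (var aes = Aes.Create())
            {
                aes.GenerateKey();
                aes.GenerateIV();
                key = aes.Key;
                iv = aes.IV;
            }
            byte[] data = Encoding.UTF8.GetBytes("file contents");
            byte[] stored = Encrypt(data, key, iv);     // what the archive holds
            byte[] restored = Decrypt(stored, key, iv); // what the owner recovers
            Console.WriteLine(Encoding.UTF8.GetString(restored));
        }
    }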
1.5.3 SENTINELS
In this scheme, unlike in the key-hash approach, only a single key can be used irrespective of the size of the file or the number of files whose retrievability is to be verified. Also, the archive needs to access only a small portion of the file F, unlike the key-hash scheme, which requires the archive to process the entire file F for each protocol verification.
If the prover has modified or deleted a substantial portion of F, then with high probability it will also have suppressed a number of sentinels.
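A simplified sketch of the sentinel idea follows (the positions, counts and in-memory "archive" are illustrative; a real scheme encrypts the file first and samples only a few sentinels per challenge):

    // Sketch: plant random sentinel bytes in the file and later spot-check
    // them; a prover that suppressed much of F likely destroyed some sentinels.
    using System;
    using System.Collections.Generic;

    class SentinelSketch
    {
        static Dictionary<int, byte> PlantSentinels(byte[] file, int count, Random rng)
        {
            var sentinels = new Dictionary<int, byte>();
            while (sentinels.Count < count)
            {
                int pos = rng.Next(file.Length);
                if (sentinels.ContainsKey(pos)) continue;
                byte value = (byte)rng.Next(256);
                file[pos] = value;      // in practice the file is encrypted first
                sentinels[pos] = value; // verifier remembers (position, value)
            }
            return sentinels;
        }

        // Compare the bytes at every remembered sentinel position; a real
        // verifier would sample only a few positions per challenge.
        static bool SpotCheck(byte[] archiveCopy, Dictionary<int, byte> sentinels)
        {
            foreach (var pair in sentinels)
                if (archiveCopy[pair.Key] != pair.Value)
                    return false;       // a sentinel was suppressed
            return true;
        }

        static void Main()
        {
            var rng = new Random();
            byte[] file = new byte[1024];
            var sentinels = PlantSentinels(file, 16, rng);
            Console.WriteLine(SpotCheck(file, sentinels)
                ? "Spot check passed."
                : "File tampered with high probability.");
        }
    }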
1.5.4 VERIFICATION PHASE
Before storing the file at the archive, the verifier preprocesses the file, appends some metadata to it and stores it at the archive. At verification time, the verifier uses this metadata to check the integrity of the data. If the recomputed metadata does not match the metadata already stored in the database, the file is inconsistent and the user is alerted with a warning message.
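A minimal sketch of this verification step, assuming the metadata is a keyed digest such as HMAC-SHA256 (the report does not name a specific construction, and the key and strings below are placeholders):

    // Sketch: the verifier keeps metadata (a keyed digest) computed at
    // preprocessing time and recomputes it later over the archived bytes.
    using System;
    using System.Security.Cryptography;
    using System.Text;

    class VerificationPhase
    {
        static byte[] ComputeMetadata(byte[] fileBytes, byte[] verifierKey)
        {
            using (var hmac = new HMACSHA256(verifierKey))
                return hmac.ComputeHash(fileBytes);
        }

        static void Main()
        {
            byte[] key = Encoding.UTF8.GetBytes("verifier-secret"); // placeholder
            byte[] file = Encoding.UTF8.GetBytes("archived file contents");

            byte[] stored = ComputeMetadata(file, key);  // kept by the verifier

            // Later: fetch the file back from the archive and recompute.
            byte[] fresh = ComputeMetadata(file, key);
            bool intact = Convert.ToBase64String(stored) == Convert.ToBase64String(fresh);
            Console.WriteLine(intact
                ? "Metadata matches: file is consistent."
                : "Warning: metadata mismatch, the file has been modified.");
        }
    }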
It is important to note that our proof of information consistency protocol only checks the integrity of the data, i.e. whether the data has been illegally modified or deleted. It does not prevent the archive from modifying the data.
CHAPTER 2
LITERATURE SURVEY

2.1 CLOUD COMPUTING
A literature survey is the most important step in the software development process. Before developing the tool it is necessary to determine the time factor, the economy and the company's strength. Once these things are satisfied, the next step is to determine which operating system and language can be used for developing the tool. Once the programmers start building the tool, they need a lot of external support. This support can be obtained from senior programmers, from books or from websites. These considerations are taken into account before building the proposed system.
Cloud computing provides unlimited infrastructure to store and execute customer data and programs. Customers do not need to own the infrastructure; they merely access or rent it, so they can forego capital expenditure and consume resources as a service, paying only for what they use.
Instead of running programs and storing data on an individual desktop computer, everything is hosted in the cloud, a nebulous assemblage of computers and servers accessed via the Internet. Cloud computing lets you access all your applications and documents from anywhere in the world, freeing you from the confines of the desktop and making it easier for group members in different locations to collaborate. In short, cloud computing enables a shift from the computer to the user, from applications to tasks, and from isolated data to data that can be accessed from anywhere and shared with anyone. The user no longer has to take on the task of data management and does not even have to remember where the data is. All that matters is that the data is in the cloud, and thus immediately available to that user and to other authorized users.
Benefits of Cloud Computing:
Minimized capital expenditure
Location and device independence
Utilization and efficiency improvement
Very high scalability
High computing power

How secure is the encryption scheme? Is it possible for all of my data to be fully encrypted? What algorithms are used? Who holds, maintains and issues the keys? Encryption accidents can make data totally unusable, and encryption can complicate availability.

2.2 EXISTING SYSTEM
As data generation is far outpacing data storage, it proves costly for small firms to frequently update their hardware whenever additional data is created. Maintaining the storage can also be a difficult task, and transmitting files across the network to the client can consume heavy bandwidth.
The problem is further complicated by the fact that the owner of the data may be a small device, like a PDA (personal digital assistant) or a mobile phone, with limited CPU power, battery power and communication bandwidth.
Disadvantages
The main drawback of this scheme is the high resource cost its implementation requires.
Computing hash values for even moderately large data files can be computationally burdensome for clients with limited computational power (PDAs, mobile phones, etc.).
Transmitting the file consumes a large amount of bandwidth.
2.3 PROPOSED SYSTEM
One of the important concerns that needs to be addressed is assuring the customer of the integrity, i.e. the correctness, of his data in the cloud. As the data is not physically accessible to the user, the cloud should provide a way for the user to check whether the integrity of his data is maintained or has been compromised. In this paper we provide a scheme that gives a proof of data integrity in the cloud, which the customer can employ to check the correctness of his data in the cloud.
This proof can be agreed upon by both the cloud and the customer and can be incorporated in the Service Level Agreement (SLA). It is important to note that our proof of data integrity protocol only checks the integrity of the data, i.e. whether the data has been illegally modified or deleted.
Advantages
Apart from reducing storage costs, outsourcing data to the cloud also helps in reducing maintenance, and it avoids local storage of data entirely.
It reduces the costs of storage, maintenance and personnel.
It reduces the chance of losing data through hardware failures.
The archive cannot cheat the owner.

2.4 SOFTWARE DESCRIPTION
2.4.1 C#
C# (pronounced "see sharp") is a multi-paradigm programming language encompassing strong typing and imperative, declarative, functional, generic, object-oriented (class-based) and component-oriented programming disciplines. It was developed by Microsoft within its .NET initiative and later approved as a standard by Ecma (ECMA-334) and ISO (ISO/IEC 23270:2006). C# is one of the programming languages designed for the Common Language Infrastructure.
The ECMA standard lists the design goals for C# as follows:
The C# language is intended to be a simple, modern, general-purpose, object-oriented programming language.
The language, and implementations thereof, should provide support for software engineering principles such as strong type checking, array bounds checking, detection of attempts to use uninitialized variables, and automatic garbage collection. Software robustness, durability, and programmer productivity are important.
The language is intended for use in developing software components suitable for deployment in distributed environments.
Source code portability is very important, as is programmer portability, especially for those programmers already familiar with C and C++.
Support for internationalization is very important.
C# is intended to be suitable for writing applications for both hosted and embedded systems, ranging from the very large that use sophisticated operating systems, down to the very small having dedicated functions.
Although C# applications are intended to be economical with regard to memory and processing power requirements, the language was not intended to compete directly on performance and size with C or assembly language.
2.4.2 .NET FRAMEWORK PLATFORM ARCHITECTURE
Microsoft .NET is a set of Microsoft software technologies for rapidly building and integrating XML Web services, Microsoft Windows-based applications, and Web solutions. The .NET Framework is a language-neutral platform for writing programs that can easily and securely interoperate. The .NET Framework provides the foundation for components to interact seamlessly, whether locally or remotely on different platforms.
It standardizes common data types and communications protocols so that components created in different languages can easily interoperate.
Fig 2.1 .NET Framework Architecture (layers, from top to bottom: ASP.NET XML Web Services and Windows Forms; Base Class Libraries; Common Language Runtime; Operating System)
The .NET Framework has two main parts:
1. The Common Language Runtime (CLR).
2. A hierarchical set of class libraries.
The CLR is described as the execution engine of .NET. It provides the environment within which programs run.
The most important features are:
Conversion from a low-level assembler-style language, called Intermediate Language (IL), into code native to the platform being executed on.
Memory management, notably including garbage collection.
Checking and enforcing security restrictions on the running code.
Loading and executing programs, with version control and other such features.
Common Type System
The CLR uses the Common Type System (CTS) to strictly enforce type-safety. This ensures that all classes are compatible with each other, by describing types in a common way. The CTS defines how types work within the runtime, which enables types in one language to interoperate with types in another language, including cross-language exception handling. As well as ensuring that types are only used in appropriate ways, the runtime also ensures that code does not attempt to access memory that has not been allocated to it.
Common Language Specification
The CLR provides built-in support for language interoperability. To ensure that you can develop managed code that can be fully used by developers using any programming language, a set of language features and rules for using them, called the Common Language Specification (CLS), has been defined. Components that follow these rules and expose only CLS features are considered CLS-compliant.
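As a small illustration (the type and property names are invented for the example), marking an assembly CLS-compliant makes the compiler warn when a public member uses a non-CLS type such as an unsigned integer:

    using System;

    // Asking the compiler to flag public members that other .NET
    // languages might not understand.
    [assembly: CLSCompliant(true)]

    public class Measurements
    {
        // CLS-compliant: int (System.Int32) is part of the CLS.
        public int Count { get; set; }

        // Unsigned types are outside the CLS. If the next line were
        // uncommented, the compiler would warn (CS3003) about it.
        // public uint RawCount { get; set; }
    }

    class Demo
    {
        static void Main()
        {
            var m = new Measurements { Count = 3 };
            Console.WriteLine(m.Count);
        }
    }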
THE CLASS LIBRARY
.NET provides a single-rooted hierarchy of classes, containing over 7000 types.
The root of the namespace is called System; this contains basic types like Byte, Double, Boolean, and String, as well as Object. All objects derive from System.Object.
As well as objects, there are value types. Value types can be allocated on the stack, which can provide useful flexibility. There are also efficient means of converting value types to object types if and when necessary.
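The conversion mentioned here is boxing and unboxing, which a short example makes concrete:

    using System;

    class BoxingDemo
    {
        static void Main()
        {
            int count = 42;       // value type, typically allocated on the stack

            // Boxing: the runtime copies the value into an object on the heap.
            object boxed = count;

            // Unboxing: an explicit cast copies the value back out.
            int unboxed = (int)boxed;

            Console.WriteLine("{0} {1} {2}", count, boxed, unboxed);
        }
    }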
2.4.3 SQL-SERVER
The OLAP Services feature available in SQL Server version 7.0 is now called SQL Server 2000 Analysis Services.
The term OLAP Services has been replaced with the term Analysis Services. Analysis Services also includes a new data mining component. The Repository component available in SQL Server version 7.0 is now called Microsoft SQL Server 2000 Meta Data Services. References to the component now use the term Meta Data Services.
The term repository is used only in reference to the repository engine within Meta Data Services. A SQL Server database consists of the following types of objects:
1. TABLE
2. QUERY
3. FORM
4. REPORT
5. MACRO
TABLE
A database is a collection of data about a specific topic.
VIEWS OF TABLE
We can work with a table in two views:
1. Design View
2. Datasheet View
Design View
To build or modify the structure of a table, we work in the table design view. We can specify what kind of data the table will hold.
Datasheet View
To add, edit or analyse the data itself, we work in the table's datasheet view mode.
QUERY
A query is a question that has to be asked of the data. Access gathers data that answers the question from one or more tables. The data that makes up the answer is either a dynaset (if you edit it) or a snapshot (which cannot be edited).
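By way of illustration, such a query can be issued from C# through ADO.NET (the connection string, table and column names below are invented for the example):

    // Sketch: ask a question of the data ("which files belong to this user?")
    // against a SQL Server database via ADO.NET.
    using System;
    using System.Data.SqlClient;

    class QueryDemo
    {
        static void Main()
        {
            // Placeholder connection details.
            string connectionString =
                "Server=localhost;Database=CloudStore;Integrated Security=true";

            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();
                var command = new SqlCommand(
                    "SELECT FileName FROM Files WHERE UserName = @user", connection);
                command.Parameters.AddWithValue("@user", "alice");

                using (SqlDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read())
                        Console.WriteLine(reader.GetString(0));
                }
            }
        }
    }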
2.4.4 JSCRIPT
JScript is Microsoft's extended implementation of ECMAScript (ECMA-262), an international standard based on Netscape's JavaScript. JScript is implemented as a Windows Script engine. This means that it can be plugged in to any application that supports Windows Script, such as Internet Explorer.