
IT is moving to the cloud.  This trend will continue as the cloud provides too many advantages for customers to ignore: reliability, scalability, and an efficient cost model.  The problem that has arisen from cloud migration is how to protect data in a multi-cloud environment.

The challenges of a multi-cloud environment

  • Through APIs, SDKs, plugins, connectors, and the like, clouds talk to other clouds, on-premises servers, and endpoints. How do you keep track of all the interconnections?
  • If all the endpoints, servers, and clouds are talking to each other, what is your weakest security link?

Data security has traditionally aligned with network security: defend devices and control the communication paths between them.  This methodology provides value in many cases, such as an Oracle database that is accessed only from a single on-premises server.  There is no reason the database should accept requests from any other server or IP address.

Managing the magnitude of interconnections becomes a Sisyphean task.  Firewall rules started with blacklists: connections were allowed anywhere except to explicitly denied addresses.  As the blacklists grew faster and faster, the policy logic flipped to whitelists, allowing connections only to explicitly approved devices.  Managing by whitelist was the easier task.
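To make the difference concrete, here is a minimal sketch in Python (the addresses and function names are hypothetical, not any particular firewall's API).  The two policies differ only in their default decision:

```python
# Minimal sketch of blacklist vs. whitelist connection filtering.
# The addresses and rule sets here are hypothetical examples.

BLACKLIST = {"203.0.113.7", "198.51.100.23"}   # default-allow: deny only these
WHITELIST = {"10.0.0.5", "10.0.0.6"}           # default-deny: allow only these

def allowed_by_blacklist(source_ip: str) -> bool:
    # Default-allow: every address passes unless explicitly denied.
    return source_ip not in BLACKLIST

def allowed_by_whitelist(source_ip: str) -> bool:
    # Default-deny: every address is blocked unless explicitly approved.
    return source_ip in WHITELIST

# The whitelist stays short and auditable; the blacklist must chase
# every new hostile address and grows without bound.
print(allowed_by_blacklist("192.0.2.1"))  # True  (unknown address slips through)
print(allowed_by_whitelist("192.0.2.1"))  # False (unknown address is blocked)
```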

Endpoint and network security remain requirements for preventing malware, keyloggers, and other malicious code from running on devices in the network.

What about content security?

Data leaks caused by misconfigured Amazon S3 buckets have become a typical news headline.  Data that should be accessible only to authorized users has somehow been left entirely open to everyone.
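As a quick illustration, a bucket whose ACL grants access to the global AllUsers group is readable by anyone on the internet.  The sketch below uses the public boto3 API to flag such a grant; the bucket name is a hypothetical placeholder:

```python
# Sketch: flag an S3 bucket whose ACL grants access to everyone.
# Requires boto3 and AWS credentials; "example-bucket" is hypothetical.
import boto3

ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

s3 = boto3.client("s3")
acl = s3.get_bucket_acl(Bucket="example-bucket")

for grant in acl["Grants"]:
    grantee = grant["Grantee"]
    if grantee.get("Type") == "Group" and grantee.get("URI") == ALL_USERS:
        print(f"WARNING: bucket is open to everyone ({grant['Permission']})")
```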

How can content security be improved?

Instead of thinking device and network first, try focusing on the data itself.

  1. Automatically identify important/sensitive data. The old paradigm was discover, classify, and protect, a process that always requires users to make manual decisions.  This opt-in methodology is so prone to mistakes that even the most diligent employees will eventually misclassify information.  The alternative is an opt-out approach in which data is protected by default: employees with legitimate needs are allowed to remove protection from data, and all changes in status are logged for monitoring and auditing (see the first sketch after this list).
  2. The content is what needs protection. Legacy solutions focus on files or on access to folders and servers, and they have tried to control copy-paste and Save-As functions to protect data.  Files were encrypted when employees needed to share them with 3rd parties, but that encryption protects files only at rest and in transit, because files must be decrypted before they can be accessed.
The best option for data protection is content protection. Protect the content that’s important, regardless of where the files go.
  • A file is just a storage container for data, and files are encrypted to protect that data.  Files should stay encrypted at rest, in transit, and even in use; decrypting a file creates a possible data leak.
  • As authorized applications access file data, protection needs to apply to the process as well.  Control the application's ability to copy data to the clipboard, or to share, export, or transfer data to other applications, devices, or IP addresses (see the second sketch below).
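To make the opt-out model from point 1 concrete, here is a minimal sketch; all class and function names are hypothetical illustrations, not a specific product's API:

```python
# Sketch of opt-out protection: data is protected on creation,
# and removing protection is an explicit, audited action.
# All names here are hypothetical illustrations.
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

@dataclass
class Document:
    name: str
    owner: str
    protected: bool = True  # opt-out: protected by default, no user decision

def remove_protection(doc: Document, user: str, reason: str) -> None:
    # Legitimate opt-out is allowed, but every status change is logged.
    doc.protected = False
    audit_log.info("UNPROTECT %s by %s: %s", doc.name, user, reason)

doc = Document(name="q3-forecast.xlsx", owner="alice")
print(doc.protected)                       # True: protected automatically
remove_protection(doc, "alice", "approved public press release")
print(doc.protected)                       # False, with an audit trail
```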
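For the process-level controls in the last bullet, the decision logic might look like the following sketch.  The policy table and action names are hypothetical; real enforcement would hook the clipboard, export, and network APIs in the operating system or an endpoint agent:

```python
# Sketch of process-level policy checks for protected content.
# The policy table and action names are hypothetical examples.

POLICY = {
    "copy_to_clipboard": False,    # block clipboard copies of protected data
    "export_to_usb": False,        # block transfers to removable devices
    "send_to_approved_app": True,  # allow hand-off to whitelisted applications
}

def action_allowed(action: str, content_protected: bool) -> bool:
    # Unprotected content is unrestricted; protected content follows policy,
    # with default-deny for any action not explicitly listed.
    if not content_protected:
        return True
    return POLICY.get(action, False)

print(action_allowed("copy_to_clipboard", content_protected=True))    # False
print(action_allowed("send_to_approved_app", content_protected=True)) # True
```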

With a combination of opt-out protection and security that follows sensitive content wherever it is, organizations can ensure that data moving between cloud environments will always be protected.