Coco Alemana
Coco Alemana is a faster way to work with data. It gives data scientists, analysts, and engineers the ability to visually interact with their data to clean, manipulate, and analyze datasets of any size. It's a code-optional platform with extensive connections to remote data sources, optimized for fast insights at any data scale. More advanced users can continue using SQL to extend the capabilities of Coco Alemana's visual-first interface. Users can expect to save 30 minutes to 2 hours per day on tedious data cleaning and manipulation tasks.
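As a rough sketch of the kind of cleaning work plain SQL can express on a platform like this — here against an in-memory SQLite table, with made-up table and column names — a user might normalize text fields and deduplicate in one query:

```python
import sqlite3

# Hypothetical "customers" table with messy, inconsistent values.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, email TEXT, country TEXT)")
conn.executemany(
    "INSERT INTO customers VALUES (?, ?, ?)",
    [(1, " Ada@Example.com ", "us"),
     (2, "ada@example.com", "US"),
     (3, "grace@example.com", "uk")],
)

# Typical cleanup in plain SQL: trim and normalize text, then
# collapse duplicates by grouping on the normalized values.
rows = conn.execute("""
    SELECT MIN(id)           AS id,
           LOWER(TRIM(email)) AS email,
           UPPER(country)     AS country
    FROM customers
    GROUP BY LOWER(TRIM(email)), UPPER(country)
    ORDER BY id
""").fetchall()

for row in rows:
    print(row)
```

The same query could be pointed at a remote source rather than SQLite; the table and values here are invented purely for illustration.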
Coco Alemana can be installed in under 2 minutes, with no prior programming knowledge and no data transfers required. Users can enter their credentials and get access to their remote data stores in seconds.
It's also deeply integrated with the operating system, allowing for a high-performance, cohesive experience.
Learn more
Adverity
Adverity is the fully integrated data platform for automating the connectivity, transformation, governance, and utilization of data at scale.
The platform enables businesses to blend disparate datasets, such as sales, finance, marketing, and advertising data, into a single source of truth about business performance. Through automated connectivity to hundreds of data sources and destinations, unrivaled data transformation options, and powerful data governance features, Adverity is the easiest way to get your data how you want it, where you want it, and when you need it.
Adverity was founded in 2015 and is headquartered in Vienna with offices in London and New York, and currently works with leading brands and agencies including Unilever, Bosch, IKEA, Forbes, GroupM, Publicis, and Dentsu.
Learn more
Lentiq
Lentiq is a collaborative data-lake-as-a-service environment built to enable small teams to do big things: quickly run data science, machine learning, and data analysis at scale in the cloud of your choice. With Lentiq, your teams can ingest data in real time and then process, clean, and share it. From there, Lentiq makes it possible to build, train, and share models internally. Simply put, data teams can collaborate with Lentiq and innovate without restrictions. Data lakes are storage and processing environments that provide ML, ETL, schema-on-read querying, and much more. Working on some data science magic? You definitely need a data lake.
In the post-Hadoop era, the big, centralized data lake is a thing of the past. Lentiq instead uses data pools: multi-cloud, interconnected mini data lakes that work together to give you a stable, secure, and fast data science environment.
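Schema-on-read, mentioned above, means records land in the lake untyped and a schema is applied only at query time. A minimal generic sketch of the idea in Python (not Lentiq's API; field names, types, and defaults are invented for illustration):

```python
import io
import json

# Raw, untyped records as they might land in a data lake (JSON lines).
# Note the second record is missing the "ts" field entirely.
raw = io.StringIO(
    '{"user": "1", "amount": "19.90", "ts": "2023-01-05"}\n'
    '{"user": "2", "amount": "5.00"}\n'
)

# The schema is supplied at read time, not enforced at write time:
# each field gets a name, a type to cast to, and a default if absent.
schema = [("user", int, None), ("amount", float, 0.0), ("ts", str, "")]

def read_with_schema(stream, schema):
    """Parse each raw record and coerce it to the caller's schema."""
    for line in stream:
        rec = json.loads(line)
        yield {name: cast(rec[name]) if name in rec else default
               for name, cast, default in schema}

rows = list(read_with_schema(raw, schema))
print(rows)
```

Because the schema lives with the query rather than the storage, two teams can read the same raw data with different schemas, which is what makes this style of querying useful for shared lakes.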
Learn more
Data Lakes on AWS
Many Amazon Web Services (AWS) customers require a data storage and analytics solution that offers more agility and flexibility than traditional data management systems. A data lake is an increasingly popular way to store and analyze data because it allows companies to manage multiple data types from a wide variety of sources and store that data, structured and unstructured, in a centralized repository. The AWS Cloud provides many of the building blocks required to help customers implement a secure, flexible, and cost-effective data lake, including AWS managed services that help ingest, store, find, process, and analyze both structured and unstructured data. To support customers as they build data lakes, AWS offers the data lake solution, an automated reference implementation that deploys a highly available, cost-effective data lake architecture on the AWS Cloud along with a user-friendly console for searching and requesting datasets.
Learn more