Join us for a one-day event in Austin to talk about optimizing the data layer to scale modern web and mobile apps.
Monolithic databases are so 20th Century. Today teams are using a JSON document store for scale, a graph database to create connections, a message queue to handle messaging, a caching system to accelerate responses, a time-series database for streaming data, a relational DB for metrics, and more. It can be hard to stay on top of all of your options or to visualize how it can all work together.
Why Attend DataLayer?
While much talk in developer circles these days focuses on the app layer, we feel not enough attention is paid to the data layer. Data is the secret ingredient to ensuring apps are optimized for speed, security, and user experience. We're putting together an agenda that shines a light on how teams are leveraging various data services (and not just the ones that Compose supports!) to architect beautiful apps. Learn how to scale your app to infinity (or close to it).
Distributed systems and microservices, automation and orchestration, containers and schedulers and multiple persistence layers oh my ... infrastructure today is a Cambrian explosion of complexity. How is a simple engineer to make sense of it all? That's where the next generation of observability comes in: engineering your systems to be understandable, explorable, and self-explanatory, with tooling that supports very high cardinality and dimensionality, handles unpredictable schemas, and makes social workflow a first-class citizen. Let's talk about what modern tooling for complex systems looks like, and which of the old ways must die.
Container technologies like Docker, paired with orchestrators like Kubernetes, have revolutionized the way developers deploy and manage stateless applications. Containers are quick to launch and make efficient use of underlying compute resources. Orchestration engines like Kubernetes simplify the deployment, lifecycle management, and scaling of applications.
Back in 2012 and 2013, MongoDB was the database of choice for many developers starting new projects, since it was the "coolest" database solution available. We'll explore how a decision like that affects startups, and why we decided to leave MongoDB for good and transition to a relational database.
On the internet, we should expect occasional failure. In this session, we’ll look at how to configure your RabbitMQ installation for different failure scenarios and what your options are in each case. Does it matter if a message goes missing? How about if the queue gets so full the server falls over? Come along and learn how to turn failures into measurable successes.
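One common way to prepare for the "queue gets so full the server falls over" scenario is to cap queue length and dead-letter the overflow instead of losing it. As a rough sketch (the policy name, queue pattern, and limit below are hypothetical, not from the talk), a RabbitMQ policy for that might look like:

```
# Cap matching queues at 100k messages; on overflow, reject new publishes
# and route dead-lettered messages to the "dlx" exchange for inspection.
rabbitmqctl set_policy overflow-dlx "^orders\." \
  '{"max-length": 100000, "overflow": "reject-publish", "dead-letter-exchange": "dlx"}' \
  --apply-to queues
```

The design choice here is that a bounded queue with a dead-letter exchange turns a silent crash into a measurable, recoverable failure mode.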
Although there are still battles to be fought, the war has already been won. Find out how PostgreSQL answers all of your data layer needs. PostgreSQL is one of the longest-standing open source database systems, with legions of users leading the way to a sane, productive, and performance-driven data layer.
This presentation will give an overview of PostgreSQL technologies, including:
- NoSQL capabilities
- Relational capabilities
- Replication & High Availability
- Features you can't believe you lived without
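As a taste of the first bullet, PostgreSQL's JSONB type gives you document-store ergonomics inside a relational table. A tiny sketch (table and data are made up for illustration):

```sql
-- Store schemaless documents alongside relational columns
CREATE TABLE events (id serial PRIMARY KEY, body jsonb);
INSERT INTO events (body) VALUES ('{"type": "click", "user": "ada"}');

-- Query by containment and extract fields, as you would in a document DB
SELECT body->>'user' FROM events WHERE body @> '{"type": "click"}';

-- A GIN index makes those containment queries fast
CREATE INDEX ON events USING gin (body);
```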
It is increasingly common for organizations to have a backend architecture powered by multiple databases and microservices. While this can help with scalability and fault tolerance, it introduces new challenges when we want to combine data from multiple services in a single view. In this quick talk, we'll see how GraphQL can be leveraged to join data from multiple databases in a unified way.
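The core idea can be sketched without a GraphQL library: each field gets a resolver that knows which backend to call, and the server stitches the results into one response. A minimal plain-Python sketch (the two in-memory "services" stand in for separate databases; all names are hypothetical):

```python
# Hypothetical stand-ins for two separate backends.
USER_SERVICE = {1: {"id": 1, "name": "Ada"}}        # e.g. a relational DB
ORDER_SERVICE = {1: [{"sku": "A-100", "qty": 2}]}   # e.g. a document store

def resolve_user(user_id):
    """Resolver for the `user` field: fetches from the user service."""
    return USER_SERVICE[user_id]

def resolve_orders(user):
    """Resolver for the nested `orders` field: fetches from the order service."""
    return ORDER_SERVICE.get(user["id"], [])

def execute_user_query(user_id):
    """Assemble one unified view, the way a GraphQL server walks resolvers."""
    user = resolve_user(user_id)
    return {"user": {**user, "orders": resolve_orders(user)}}

print(execute_user_query(1))
```

The client asks for one shape of data; which database each field came from is invisible to it, which is exactly the unification this talk is about.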
Spanner is Google’s scalable, multi-version, globally distributed, and synchronously replicated database. It is the first system to distribute data at global scale and support externally consistent distributed transactions. In this session we are going to take a hands-on look at using Spanner for everyday applications and examine the future of scale-out SQL databases.
Digit's users engage with our service via the Digit bot, which answers questions as pertinent as `what's my checking balance` and `is my money safe`, along with more humorous ones like `tell me a joke`. After trying a couple of approaches, we landed on a solution that uses past user questions to help answer future ones.
Our bot lives a dual life straddling Mongo and Elasticsearch, but Elasticsearch is where the real fine-tuning of matching questions to answers happens. The raw content of questions and answers lives in Mongo, and we use Node's event emitters to track updates and keep Elasticsearch in sync.
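The emitter pattern here is simple to sketch: writes go to the primary store, and an event fires so the search index is updated to match. A minimal Python stand-in for the Node approach (the dicts below are hypothetical placeholders for the Mongo collection and the Elasticsearch index):

```python
class Emitter:
    """Tiny stand-in for Node's EventEmitter."""
    def __init__(self):
        self.handlers = {}

    def on(self, event, fn):
        self.handlers.setdefault(event, []).append(fn)

    def emit(self, event, *args):
        for fn in self.handlers.get(event, []):
            fn(*args)

primary = {}   # stand-in for the Mongo collection (raw questions/answers)
index = {}     # stand-in for the Elasticsearch index (tuned for matching)

bus = Emitter()
# On every save, mirror a normalized copy of the question into the index.
bus.on("saved", lambda doc: index.update({doc["id"]: doc["question"].lower()}))

def save(doc):
    """Write to the primary store, then emit so the index stays in sync."""
    primary[doc["id"]] = doc
    bus.emit("saved", doc)

save({"id": 7, "question": "What's my checking balance?"})
```

The raw document stays authoritative in the primary store, while the index holds only the representation tuned for matching.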
Recently, the company behind RethinkDB unexpectedly shut down. A couple of team members and community members got together and helped RethinkDB find a new home at The Linux Foundation. In this talk, Christina shares the experience of the company shutting down, the current state of open source company models, and suggestions for open source companies should they find themselves in a similar situation.
With the rise of consumer electronic devices and the IoT, there is a volumetric increase not only in the total amount of data, but in the frequency at which it is produced. Traditional ETL data pipelines do not work at the pace of the real world; batch processing is slowing us down. Fast data means reacting to an individual piece of data in motion. We can now stream our data at massive speeds and filter it on demand to be drained into applications for real-time actionability. Watchful.io is a massive, real-time stream processing tool that lets us work with data at massive scale and speed to simplify fast data architectures.
Not all RESTful backends are as fast as we want them to be. Since we can't own all of them, how can we speed them up? Creating data pipelines is a good way to change the data we get, and caching is a good way to speed them up. However, who wants to run all that infrastructure just for a speed increase? Well, now you don't have to. This talk will walk through using AWS Lambda or Apache OpenWhisk to create stateless, serverless actions that are fast and cached with Redis. I'll cover some of the pitfalls of this approach as well as architectures that can help make it painless and inexpensive to maintain.
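The shape of such an action is a cache-aside lookup in front of the slow backend. A minimal sketch (a plain dict stands in for Redis, and the backend, paths, and TTL are hypothetical, not from the talk):

```python
import time

CACHE = {}  # stand-in for Redis; a real action would use a Redis client here
TTL = 60    # seconds before a cached response is considered stale

def slow_backend(path):
    """Hypothetical slow REST backend we don't control."""
    return {"path": path, "data": "payload"}

def handler(event):
    """Lambda/OpenWhisk-style entry point with a cache-aside lookup."""
    path = event["path"]
    hit = CACHE.get(path)
    if hit and time.time() - hit["at"] < TTL:
        return {"cached": True, **hit["body"]}
    body = slow_backend(path)  # only call the backend on a miss
    CACHE[path] = {"at": time.time(), "body": body}
    return {"cached": False, **body}

print(handler({"path": "/users"}))  # first call: miss
print(handler({"path": "/users"}))  # second call within TTL: hit
```

The reason for Redis rather than the in-process dict shown here is that serverless instances are ephemeral: an external cache survives cold starts and is shared across concurrent action instances.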
Let's do a live coding look at using GraphQL to provide an API backed by five different data sources. I'll start with what GraphQL is and the benefits it has over REST, and then I'll set up an app scenario, figure out what data we need and what our GraphQL query will look like, and then we'll fill in the backend code that makes database requests (GraphQL resolvers). Finally, we'll query the working API using GraphiQL, the GraphQL IDE.
Databases were traditionally designed to power CRUD apps. With the explosion of data in the early 2000s, the whole NoSQL movement was born to allow web-scale CRUD operations. However, with the emergence of the #realtimeweb, database designs find themselves broken yet again. IoT is fundamentally transforming data processing needs from hour and minute scales down to milli- and microsecond scales.
DataLayer is being held at Austin's Alamo Drafthouse.
1120 S Lamar Blvd., Austin, TX 78704
Interested in supporting DataLayer? Email us at email@example.com to get all the deets.