Top 6 FAQs on Transitioning to the Cloud with Distributed SQL
For decades, organizations have relied on monolithic databases to run core business operations, but these traditional relational database management systems weren’t designed to support the requirements of modern application architectures.
Modern tools and technologies allow developers to design and build web-scale microservices and container-based applications, and non-scalable centralised databases just don’t work in this type of architecture. In fact, they become a real limitation, particularly for mission-critical applications moving to distributed or cloud environments. Recently I presented a webinar on how enterprises can move beyond these limitations with a distributed SQL database, and I got some great questions, so I wanted to take some time to share them and the answers.
Q1: Since storage managers (SMs) and transaction engines (TEs) are separate, can they be scaled independently? Do they have to maintain a specific ratio?
You're looking for redundancy in a no-single-point-of-failure distributed model, so at a minimum you'd want more than one storage manager and more than one transaction engine. You can scale out as many TEs as you need to support throughput, and run as many SMs as you need to guarantee the level of redundancy your application requires. With only one TE and one SM there is no built-in redundancy, so the minimum configuration for resiliency is two of each. Beyond that, there's no specific ratio you need to maintain; the two layers scale independently.
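As a toy illustration of that rule (a sketch only; the function and process names are hypothetical and not part of any NuoDB tooling), the check below encodes the idea that redundancy requires at least two processes of each type, while the two layers scale independently:

```python
# Toy topology check, illustrative only. The names here are
# hypothetical and do not come from NuoDB's tooling.

def has_redundancy(transaction_engines: int, storage_managers: int) -> bool:
    """A cluster avoids a single point of failure only when every
    role runs at least two processes; the TE:SM ratio itself is free."""
    return transaction_engines >= 2 and storage_managers >= 2

# Scale the layers independently: more TEs for throughput,
# more SMs for storage redundancy.
print(has_redundancy(1, 1))   # False: minimal, but fragile
print(has_redundancy(4, 2))   # True: throughput-heavy, still redundant
```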
Q2: How do you guarantee ACID properties while maintaining performance?
Just to clarify, ACID stands for atomicity, consistency, isolation, and durability. How do we maintain those guarantees while also maintaining performance? That's a deep subject, and I'm more than happy to share more about it. Because we're a distributed database, a piece of data can exist in more than one place in the cluster at the same time: on disk in an SM, in memory, or in more than one TE. We're also a multi-version concurrency control database, and the way we store data allows us to keep multiple versions of a piece of data at any particular time.
So we understand that elastic scale with transactional consistency is critical for any application migrating from a single-instance database to a distributed architecture. NuoDB builds on logical ordering and Multi-Version Concurrency Control (MVCC); together, these mechanisms mediate update conflicts via a chosen leader and allow NuoDB to scale while maintaining high levels of performance.
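To make the MVCC idea concrete, here is a minimal, generic sketch of multi-version concurrency control in Python. It is not NuoDB's implementation; it simply shows how keeping multiple versions of a record lets each transaction read a consistent snapshot without blocking writers (commit visibility is simplified here: a write becomes visible to later snapshots immediately, where a real database would gate visibility on commit):

```python
# Minimal MVCC sketch: a generic illustration of the technique,
# not NuoDB's actual storage engine.

class MVCCStore:
    def __init__(self):
        self._versions = {}   # key -> list of (writer_txn_id, value)
        self._next_txn = 0

    def begin(self) -> int:
        """Start a transaction; its id doubles as a snapshot timestamp."""
        self._next_txn += 1
        return self._next_txn

    def write(self, txn_id: int, key: str, value):
        """Append a new version instead of overwriting in place."""
        self._versions.setdefault(key, []).append((txn_id, value))

    def read(self, txn_id: int, key: str):
        """Return the newest version visible to this transaction's snapshot."""
        for writer_id, value in reversed(self._versions.get(key, [])):
            if writer_id <= txn_id:
                return value
        return None

store = MVCCStore()
t1 = store.begin()
store.write(t1, "balance", 100)
t2 = store.begin()            # t2's snapshot includes t1's write
t3 = store.begin()
store.write(t3, "balance", 50)
print(store.read(t2, "balance"))  # 100: t2 never sees t3's later version
print(store.read(t3, "balance"))  # 50: t3 sees its own write
```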
There’s an excellent description of the NuoDB internal mechanisms to manage consistency and transactional isolation in the article by my colleague Martin Kysel here: Quick Dive into NuoDB Architecture. Martin also wrote an informative piece about the ACID properties of transactional DDL.
Q3: Do you have a concept of virtual databases that is decoupled from the transaction layer, or is the database dedicated to the entire transaction and storage layer, or can you have separation between the layers?
NuoDB provides a number of ways to support multiple schemas or multiple databases that enable dedicated transactional and storage components. I've included the diagrams below to represent the various models of multi-tenancy (or co-existence) that can be supported within a single domain (numbers 1 to 4) together with a shared-nothing architecture (number 5).
You can see from these diagrams that, with the exception of scenario one, you have the opportunity to dedicate transaction and storage level components to individual databases.
If you're thinking about a multi-tenancy situation where you've got more than one set of users in a single database, you'll want to be able to ring-fence some of that compute capacity so that different workloads and different users are not conflicting with each other. NuoDB has very flexible deployment options that allow you to do that. So you can do more than separate workloads: you can also separate applications, ring-fencing them and their physical resources.
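One hypothetical way to picture that ring-fencing (the engine names and mapping below are invented for illustration and are not NuoDB configuration syntax): each workload is pinned to its own group of transaction engines, so a heavy reporting job never competes with OLTP traffic for the same processes:

```python
# Toy tenant/workload routing sketch. Engine names and the mapping
# are hypothetical; NuoDB's real deployment options are configured
# at the domain level, not through application code like this.

ENGINE_GROUPS = {
    "reporting": ["te-report-1", "te-report-2"],
    "oltp":      ["te-oltp-1", "te-oltp-2", "te-oltp-3"],
}

def engines_for(workload: str) -> list:
    """Resolve a workload to its ring-fenced engine group."""
    return ENGINE_GROUPS[workload]

print(engines_for("reporting"))  # ['te-report-1', 'te-report-2']
```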
Q4: How do you integrate data mining technology on NuoDB databases?
NuoDB is optimised for OLTP transactions, and is ideal for systems of record and critical always-on applications. It is also a solution in many hybrid transaction/analytical processing (HTAP) use cases, where the ability to perform both online transaction processing and real-time operational intelligence processing simultaneously within the same database becomes a powerful combination.
NuoDB is not aimed at data warehousing or data lake management use cases. However, integration between NuoDB and other technologies is very straightforward via a comprehensive set of drivers, APIs, and certified development platforms. Spark and Kafka (among others) can be easily integrated with NuoDB in this way.
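As a sketch of what driver-based integration looks like, the snippet below uses Python's standard DB-API pattern, which NuoDB's Python driver (pynuodb) also follows. The built-in sqlite3 module stands in so the example is self-contained; against NuoDB you would swap the connect() call for the NuoDB driver's (those connection parameters depend on your deployment and are an assumption here):

```python
# Driver-based integration sketch using the DB-API pattern.
# sqlite3 stands in for any DB-API driver so this runs anywhere.

import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (id INTEGER, total REAL)")
cur.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 9.5), (2, 20.0)])
conn.commit()

# An external consumer (a Spark job, a Kafka connector, an ETL tool)
# reads the same way: issue a query through the driver, stream rows out.
cur.execute("SELECT id, total FROM orders ORDER BY id")
rows = cur.fetchall()
print(rows)  # [(1, 9.5), (2, 20.0)]
```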
Q5: Is the NuoDB database tied to any specific cloud provider?
No, and that's one of the beauties of our deployment model. We can happily run on Azure, on Google Cloud Platform (GCP), and on Amazon Web Services (AWS). You can even build a cluster where different components of your database are sitting on each of those three cloud providers. We're totally cloud agnostic.
It's in this way that we preserve your ability to avoid vendor lock-in. If, for example, you were using one of the proprietary cloud databases and the provider announced a price increase, your choices would be quite limited. With NuoDB, you can take your database with you: you can move off that provider, move from on-premises to cloud quite seamlessly, or move from one cloud to another, so it's very, very portable.
Q6: If we're not ready for the cloud, can we start using your database on-premises and move to the cloud later?
Yes, you can start out building your database on-premises, and when you're ready you can stretch that cluster out to the cloud, building capacity in one or more public clouds so that your cluster spans multiple environments.
You're not limited to on-premises deployment; you can also extend into the public cloud as a hybrid architecture. So yes, you can definitely take your database with you when you're ready.
Ready to scale out your mission-critical applications?
As forward-thinking enterprises move to the cloud, experts are seeking solutions to reduce technical debt while increasing business agility. Watch my webinar on demand to learn more, check out the deck on SlideShare, reach out in the comments, or send a note to firstname.lastname@example.org for more information.