Are Your Data Systems a Technology Tar Pit?

Are you still running yesterday’s database? Technology and workloads have changed, so it might be time to re-examine your trusty standby. But how do you determine whether the benefits of new data storage technology are worth the cost and effort of evaluating and switching?

Perhaps you think your current data retrieval system is “good enough,” but that seemingly fine system might be creating huge technical debt that could come back to haunt you. To keep up with the times, you might need to re-evaluate your database. This could result in a leaner, meaner, and more modern back end that can grow with your business and save you money.

Technology changes rapidly, and data storage technologies are no exception. Innovations such as solid-state drives are creating entirely new sets of tradeoffs and advantages. Meanwhile, the sheer amount of data being collected continues to explode. Older solutions simply cannot find good footing in this environment. Newer databases have an unfair advantage because of efficiencies like optimized algorithms, file formats, and data structures.

When Is ‘Good Enough’ No Longer Good Enough?

Even if your old storage system is still chugging along, it’s likely to get left in the dust by newer, faster systems that competitors are already taking advantage of. 

When exploring an update to your data retrieval back end, consider the standard big data trifecta: volume, velocity, and variety. If your database is struggling in one of these three arenas, it might be time to dig deeper and see where the overarching problem lies. But how do you know whether your business has outgrown its data storage system?

The first step is to measure your database’s workload. Workload characterization should be the foundation of all your database decisions. It’s no different from what you’d do in any other area of your business. Think about how you’d optimize your sales efforts; if you wanted to figure out who your most promising customers were so you could find more of them, you’d characterize them by segmentation: industry, size, revenue, and so on. Similarly, if you want to understand the most and least productive uses of your database, you need to categorize and slice and dice the work it’s doing. You can’t make good decisions otherwise.
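The slicing described above can start with something as simple as grouping a query log by fingerprint, so that structurally identical queries collapse into one bucket you can rank by cost. Here is a minimal sketch of that idea; the `fingerprint` regexes and the log format are illustrative assumptions, not a production-grade tool:

```python
import re
from collections import defaultdict

def fingerprint(query: str) -> str:
    """Collapse literals so structurally identical queries group together."""
    q = query.lower().strip()
    q = re.sub(r"'[^']*'", "?", q)           # string literals -> ?
    q = re.sub(r"\b\d+(\.\d+)?\b", "?", q)   # numeric literals -> ?
    q = re.sub(r"\s+", " ", q)               # normalize whitespace
    return q

def characterize(log):
    """Aggregate (query, elapsed_ms) pairs by fingerprint."""
    stats = defaultdict(lambda: {"count": 0, "total_ms": 0.0})
    for query, elapsed_ms in log:
        s = stats[fingerprint(query)]
        s["count"] += 1
        s["total_ms"] += elapsed_ms
    # Rank by total time: the workload's biggest consumers come first.
    return sorted(stats.items(), key=lambda kv: -kv[1]["total_ms"])

log = [
    ("SELECT * FROM orders WHERE id = 42", 1.2),
    ("SELECT * FROM orders WHERE id = 99", 1.4),
    ("SELECT name FROM customers WHERE region = 'EMEA'", 55.0),
]
for fp, s in characterize(log):
    print(f"{s['count']:>5}  {s['total_ms']:>8.1f}ms  {fp}")
```

Even a crude report like this answers the segmentation question: which query families dominate your workload, and which are noise.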

Unfortunately, many databases don’t provide data on their own workload and query performance. It’s a bit of a catch-22 because many older databases are mature enough to provide that data, but newer ones often aren’t. This can sometimes mean that you have enough information to decide that your older database is struggling — but not enough to decide which newer ones are well-suited for handling that workload. Redis and MongoDB, for instance, don’t have query-level instrumentation of the type you can get from Oracle’s system views.

Isn’t There A Tool For Analyzing This?

Measuring and characterizing workload is one thing; deciding whether a database is the wrong solution, and then evaluating a replacement, is another entirely. Unfortunately, as far as I’m aware, there’s no automatic way to do this. Sure, you can get help with some of the tedious parts of capturing and characterizing the workload, but using that data to make decisions is (and probably will remain) a task for expert humans.

It’s such a complicated task because the decision space of options, tradeoffs, and consequences is enormous. It encompasses the application, queries, schema, indexing, and data distribution (cardinality and long-tail versus fathead). With that many factors, you’ll need to bring in your best minds to help you make the decision.

That’s where your database architects and database administrators come in — as data storage solutions only become more complex, these tech professionals will become your best friends. They’re hopefully freed from daily drudgery and reactive firefighting so they can help with strategic decisions like this. If not, that may be the root of your troubles — you might need to get your DBAs out of the weeds before you can make progress.

Consider Your Technology Surface Area

How many technologies should you use in your data tier? This is a tricky tradeoff. I like to think of it as the “technology surface area” of your stack, and I personally like to keep mine small. That means fewer moving parts and a less complex operations strategy for your whole team. On the other hand, I know that keeping it small sometimes means jamming square pegs into round holes. It’s certainly a place for exercising judgment.

However, it’s also clear that “polyglot persistence” — the trend of using several different database technologies, each in its proper place, to take advantage of their unique strengths — is real and here to stay. Most modern applications are built on three or more kinds of databases: relational, document, columnar, key-value, search, etc. In the past, we would just use a single giant Oracle or Microsoft SQL Server instance for this, but that’s clearly no longer the right tradeoff.

When considering databases, I think it’s important to find a middle ground between the newest advantages of cutting-edge solutions and code that’s proven in the real world. It takes a long time for a database to become stable and production-ready in the wild, no matter how well it may seem to work in specific cases.

I’d suggest looking for a mature solution that can handle many workloads, not just yours specifically. The most flexible and mature databases already have the code kinks worked out, and they provide those little last-mile features and capabilities that allow the database to adapt alongside your business.

What’s The Alternative?

I truly believe that you should keep things fresh by taking calculated risks, especially with your data tier.

There’s no question that this in-depth evaluation of data retrieval technology is a large undertaking that requires attention and expertise. But the alternative should worry you. Few parts of the application and architecture become ossified like the data tier. It tends to be an extremely problematic part of the technology stack, implicated in or responsible for a lot of performance and availability problems.

The instinctive response to this is to “manage it better” by isolating it and treating it specially: restricted access, specialized teams, change control procedures — micromanaging it, in other words. Unfortunately, this has the opposite of the intended effect: It grows organizational scar tissue and creates silos and communication bottlenecks between teams and people. And that exacerbates the very performance and availability problems you’re trying to solve while decreasing team velocity and software quality.

You can only address these problems by tackling them head-on, just as Netflix unleashes its Chaos Monkey to destroy systems randomly in a highly successful effort to make them more resilient. This would be absolutely unthinkable to most IT managers, yet it’s the right thing to do.

Similarly, the counterintuitive but often strategically sound thing to do about your database problems (which turn into problems for IT overall) is often to get your thumbs into the scar tissue and massage it apart. It may hurt, but it’s good for your company in the long run.

What does this mean in the realm of databases?

It means you need to stay abreast of the latest developments in database technology and keep migrating forward at a measured pace, or you’ll end up stuck in the La Brea Tar Pits of old technology and find yourself unable to get out.

It’s much easier to say than to do, but if you do it, you’ll reap the benefits. And if you don’t, you’ll suffer the consequences.

Baron Schwartz, founder of VividCortex, is one of the world’s leading experts on MySQL, and he has helped build and scale some of the largest web, social, gaming, and mobile properties. His award-winning tools are used by tens of thousands of large MySQL deployments, including Facebook and Twitter. His book “High Performance MySQL” is widely regarded as the definitive reference for MySQL.
