Software Architecture & Scaling

How does the quality of your software build impact your ability to scale?

One might think – as we did, before we knew better – that if you have a piece of custom software or integration and it needs to scale, then it should be a relatively straightforward matter of taking that and scaling it horizontally across numerous nodes. If I can run one application and get 1x unit of throughput on it, I ought to be able to run 10 copies of it and get 10x units of throughput, right? This multiplies the power of the application and poof: I’ve got my scale.

Unfortunately, devils love details, and there are quite a few of them once we zoom into what needs to be done.

(Side note: this article applies to Horizontal Scale. If you’re unfamiliar with the difference between Horizontal and Vertical Scale, hop over here.)

 

A Tale of a Database Swap

There are any number of perils involved in scaling a sophisticated software system. We’ll carve out a very targeted example here to set some context.

Microsoft Access is a Rapid Application Development (RAD) tool that contains its own built-in database engine. Separately, Microsoft SQL Server is the company’s flagship, enterprise-level database product. Access’s database engine is the little brother; SQL Server is the big brother. Both are very capable, but SQL Server is much more robust.

Many people build Microsoft Access applications and link them to a SQL Server database rather than using the built-in Access database. It’s a very good pairing, and there’s no reason not to do it. For its niche, it’s a home-run combo.

Many people also start with the built-in Access database and later evolve away from it to SQL Server. That’s where we want to focus for our context: migrating from the built-in Access database to a SQL Server database…

Consider the “theoretical thoughts” (ahem) of a relative amateur Access developer (ahe-ahemmm) contemplating a move to a SQL Server backend:

Hey look at these people using SQL Server as a backend! Awesome! I’ll get 1000x the concurrency capabilities, unlimited storage, robust permission controls, enterprise level encryption, disaster recovery… it’ll be amazing! All I have to do is run this migration tool and turn my little brother built-in database into the big-brother SQL Server database and change the connection strings accordingly. Amazing!

So what does our intrepid would-be-scale-to-SQL-Server developer do? He runs SSMA (the SQL Server Migration Assistant), converts the built-in Access database to a SQL Server database (which is very easy and takes about an hour), then rewires his database connections to point to the new SQL Server database.
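To make the “rewiring” concrete, here’s a minimal sketch of just how little changes in the naive approach. It’s written in Python with pyodbc purely for illustration (the real thing would be Access linked tables or VBA connection strings), and the driver, server, and database names are placeholders:

```python
import pyodbc

# Before: the app talks to the built-in Access database file on disk.
access_conn = pyodbc.connect(
    r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};"
    r"DBQ=C:\apps\inventory\inventory.accdb;"
)

# After: the same app, pointed at SQL Server. The connection string is the
# only thing that changed; every form, query, and report still behaves as
# though the data were sitting in a local file.
sqlserver_conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sql01.example.local;"
    "DATABASE=Inventory;"
    "Trusted_Connection=yes;"
)
```

That ease is exactly what makes the trap so appealing.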

Then he fires up the application.

Technically, it works.

But it’s an absolute disaster.

Nothing actually works… forms that took fractions of a second to load take minutes. Reports that took 10 seconds to load still aren’t rendering after an hour of crying in the corner. Concurrency errors galore with more red than a Game of Thrones wedding.

Ahhh, naivety, and its ability to macerate our hard-earned confidence. But these are our career-progression learning opportunities, where the seasoning gets applied… the forging fires of… anyway, you get the idea.

Where Did It All Go Wrong?

Microsoft Access’s built-in database is very forgiving. We can get away with all sorts of unclean practices when building a native Access app, and it just doesn’t really care. It’s not that we’re lazy; we just didn’t know any better. SQL Server, on the other hand, is far more powerful and robust, but it does care. A lot. To use it, we need to be on our next-level game. We need to be this tall to ride. Database interactions can’t be as haphazard as they could be before. Connections need to be handled with care. Forms need to be designed with data fetches as a first-class consideration, not an afterthought. Reports need to consider what they’re actually asking of the database. We need to treat it with the respect it deserves before it submits to our will.
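To put a rough shape on that last point, here’s a simplified sketch of the difference, again in Python rather than Access forms and VBA, with a hypothetical Orders table (conn is a live database connection, like the one in the earlier sketch). The point is the shape of the request, not the specific code:

```python
def load_order_the_forgiving_way(conn, order_id):
    # Pull the whole table and filter on the client. The built-in Access
    # engine forgives this because the data is a local file; against a
    # networked SQL Server, every form load drags every row over the wire.
    rows = conn.cursor().execute("SELECT * FROM Orders").fetchall()
    return [r for r in rows if r.OrderID == order_id]


def load_order_the_respectful_way(conn, order_id):
    # Ask the server for exactly what the form needs and let it do the
    # filtering, ideally against an index on OrderID.
    cursor = conn.cursor()
    cursor.execute(
        "SELECT OrderID, CustomerID, OrderDate, Total "
        "FROM Orders WHERE OrderID = ?",
        order_id,
    )
    return cursor.fetchone()
```

Multiply the first pattern across every form and report, and “minutes to load” starts to make a sad kind of sense.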

And This Relates to Software and Scaling How??

Ok, so swapping out a database isn’t exactly the kind of scaling we had in mind (although it could be part of it). We were thinking more along the lines of application architectures, integrations, and other general software topics as they relate to scaling for large sets of users, multitenancy, and things of that nature. What gives?

Here’s the thing: the exact same causes for the failure of the database swap are responsible for the vast majority of scaling headaches.

We can’t take software written by someone who doesn’t know any better and expect it to scale, even when the primary scaling mechanisms live at the infrastructure level rather than in the software itself.

We have to design software in certain ways so it can interact properly with a scalable infrastructure. Certain things need to fit in certain places and align in certain ways; otherwise paths get crossed and scaling chokes.

Scalable infrastructures are less forgiving. Done wrong, they won’t scale and can actually make things much worse. The results can even be catastrophic (let’s hope we’ve planned our escape routes, or we’ll be spending a lot more than an hour crying in the corner…).

It’s the exact same problem, with the exact same solution.

We have to be this tall to ride.
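For a concrete (if simplified) taste of what “fitting in certain places” means in practice, here’s a small Python sketch of one of the most common offenders: per-user state held inside the process. Everything here is illustrative; the shared store is just assumed to be some backend with get/put semantics (a database, Redis, whatever fits the stack):

```python
class CartServiceLocal:
    """Fine on a single node. Behind a load balancer, each copy of the
    process keeps its own private view of the carts, so users get
    different answers depending on which node they happen to hit."""

    def __init__(self):
        self._carts = {}  # state trapped inside this one process

    def add_item(self, user_id, item):
        self._carts.setdefault(user_id, []).append(item)

    def get_cart(self, user_id):
        return self._carts.get(user_id, [])


class CartServiceShared:
    """Scales horizontally: the state lives in a shared store, so ten
    copies of the service behave like one."""

    def __init__(self, store):
        self._store = store  # any shared backend exposing get/put

    def add_item(self, user_id, item):
        cart = self._store.get(user_id) or []
        cart.append(item)
        self._store.put(user_id, cart)

    def get_cart(self, user_id):
        return self._store.get(user_id) or []
```

The same principle applies to file handles, caches, scheduled jobs, and anything else that quietly assumes there’s only one copy of the application running.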

So, What’s The Answer?

If a Microsoft Access developer with no knowledge of SQL Server writes a Microsoft Access application, it’s not going to scale to SQL Server without a ton of work. However, if someone with SQL Server experience writes a Microsoft Access application, it’ll work well against either backend. This, of course, boils down to experience.

If a solutions architect who has never had to deal with scaling headaches puts together all the pieces of the system, it’s not going to scale well, and it’ll take a ton of work to clear away the tech debt before it can. However, if a solutions architect who’s been burned here before puts it all together and ensures various scalability practices are followed, it’ll be much smoother sailing. Again… experience.

Approaches to Scalable Software

Scaling is generally gradual. Rarely do we have to go from 1x to 1000x in a very short period of time (it can be done, but it’s not typical). Instead, we might go from 1x to 10x, then 10x to 100x, then on to 1000x (planning scale by factors of 10 is usually a decent rule of thumb if there’s no other basis for expectations).

If you have the development experience to know what longer-term scale will require during the initial design and build, you wind up with software and integrations that actually can be scaled without costing a fortune in rework, and without the old-as-the-cloud practice of throwing massive sums of money at more compute and RAM in the hope that the problems go away.

In reality, this usually happens as a mix of efforts over time. Essentially, there are three ways it can be handled:

1. Design for scale up front: take the extra time and effort to make sure everything being built or integrated is done in a way that leaves very little to do in order to scale it later on.

2. Don’t worry about scaling at all when building. This is usually a “didn’t know any better” approach rather than an active decision. Either way, the result is the same: when it comes time to scale, you’re going to have some serious work to do to make it happen.

3. Take a hybrid approach: don’t spend a crazy amount of extra time and effort making it scalable from inception, but, using the experience of people who have been through it before, handle the core requirements and leave doors open so the scaling transition is relatively easy when the time comes.

The default position is to take an approach like #3 above, not trying to design for 1000x scale at the outset (which is a crazy way to do things absent some very strong need). This lets you plan and align a bit for the next scale tier, and design the software in a way that can hit that tier easily enough. Rinse and repeat, and so on. The end result is a highly scalable software and integration package that didn’t risk excessive upfront design costs but was able to smoothly transition from A to B to C to D without major scaling headaches.

It’s a pragmatic, risk-averse, flexible yet balanced approach to scaling.

Pragmatic, risk-averse, flexible yet balanced… who wouldn’t want that?
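And as one hedged sketch of what “leaving doors open” can look like in code (the names below are invented for illustration, not a prescription): put a thin interface between the application and anything you expect to outgrow, and let today’s simple implementation satisfy it.

```python
from typing import Optional, Protocol


class JobStore(Protocol):
    """The narrow seam the rest of the application depends on."""
    def save(self, job_id: str, payload: dict) -> None: ...
    def load(self, job_id: str) -> Optional[dict]: ...


class InMemoryJobStore:
    """Good enough at 1x, and cheap to build today."""

    def __init__(self) -> None:
        self._jobs: dict = {}

    def save(self, job_id: str, payload: dict) -> None:
        self._jobs[job_id] = payload

    def load(self, job_id: str) -> Optional[dict]:
        return self._jobs.get(job_id)


# Later, a database-backed or partitioned store can implement the same two
# methods and be swapped in at 10x or 100x without touching the callers.

def process(job_id: str, store: JobStore) -> None:
    job = store.load(job_id)
    if job is None:
        return
    # ...the business logic stays the same no matter which store is wired in
```

The extra cost at 1x is a few lines of interface; the payoff at 100x is a data-layer swap instead of an application rewrite.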

Does This Apply To [Fill in the Blank]?

Probably. Most likely. “Scale” is almost as ubiquitous a word as “Agile”, but at least scale has a fairly solid definition. Solid definition or not, there are all sorts of things that go into the ability to scale.

For most people not deep in software or systems development, scale is basically the ability to handle enough load to keep things moving.

It could be taking a service to market and expecting the general usage rate to increase by orders of magnitude. It could be that your business ops have natural waves and at the end of every quarter you have to process four times as much stuff as you do in the off-months. Not everything needs to be a global unicorn product to require some level of scale.

For the people who actually have to make it happen, it applies in all sorts of places.

Custom software builds? Oh yes.

Database design? You bet.

Integrations? Yup. (even to third-party systems?? *nods sagely*)

Infrastructure design? *flat look of near-contempt* what do you think…

What about… ok ok, I get it. If it’s there, it matters.

Achieving scale is the end-game. Why wouldn’t it encompass everything?

Take this to your in-house dev team of two and ask how they’re going to handle it. Then give us a call.