
Are Enterprise Systems Only for Large Companies?

You want efficiency and profitability for your business. Oftentimes, that means you need software in place that streamlines your day-to-day business processes. For some businesses, this means enterprise systems. While the term “enterprise” may lead you to believe it’s only for large corporations, it actually can be used for businesses of all sizes. It’s all a matter of what your business goals are and how you can achieve them. 

What Are Enterprise Systems?

It’s important to distinguish enterprise systems from enterprise software. While it sounds intimidating, enterprise software really just refers to software that’s designed to serve the needs of a business as a whole rather than the individuals within it. 

Enterprise systems are a little more complex, as they integrate multiple applications, protocols, and formats to support your business operations. An enterprise system includes everything your business needs, all in one place, so you don’t have to mix and match software.

For example, rather than separate software solutions for your accounting, inventory, and customer service needs, you can use one overarching system that includes all of them. Examples include ERP (enterprise resource planning) and CRM (customer relationship management) systems. It’s simply an easier, better way to work, especially for businesses that have many moving parts. 

What Are the Advantages of Enterprise Systems?

By integrating all of your data into one system, you only have to enter data and details once and let it propagate through the system, saving time and reducing errors.

Additionally, enterprise systems give you greater ability to analyze data. They can help you coordinate with suppliers, produce high-quality sales forecasts, and manage employee time effectively. You can cross reference data from different departments and areas of the business, allowing you to easily manage the events of your business. 

Enterprise systems also tend to be more reliable than typical IT solutions and easier to secure, which is becoming increasingly important. It’s no wonder large companies use these systems to improve supply chain management, automate routine customer service tasks, and save considerable amounts of money. The benefits are numerous, and particularly helpful for those businesses looking to grow. 

Can Small Businesses Use Enterprise Systems?

Small businesses are often worried about the cost of enterprise systems. As a result, they may either avoid these systems altogether or deploy them piecemeal. Neither is ideal, and the latter typically leads to an extended learning curve as segments are added, reducing the overall productivity of the business. 

Small businesses can certainly use enterprise systems; in fact, there are some developed specifically for small businesses. It is a myth that small businesses can’t afford to deploy enterprise systems, but it is very important to do it right. One solution to this is custom enterprise software, which can be surprisingly affordable.

Custom enterprise systems contain everything you need for your business and can be scaled up as you grow. For example, a startup that is producing one product and sourcing everything from one or two vendors may not need supply chain management software. As a business adds more products, however, it may be needed, and then you can go back to your software developer and have them code an update. 

Because of this, the best way to obtain the exact solutions you need for your business, no matter its size, is to hire a custom software development company. Not only can they help you figure out what you need, but also help you stay within budget while achieving your most important software goals. 

Can Your Business Benefit from an Enterprise System?

Enterprise systems are not just for large companies with high levels of revenue. They’re also a key investment for smaller businesses that can use them, scale them as they grow, and enjoy the significant time and money savings that come with them.

If you’re interested in an enterprise system for your small business, schedule a call with our custom software company. We’ll talk through your needs and help you find the best solution for your business. 

The Hidden Costs of Poor Software Quality

The total cost of poor software quality in the U.S. is estimated at $2.08 trillion. When your software isn’t what it should be, you can expect it to have an impact on your business, and not in a good way. If you find yourself using software that isn’t quite up to par, it’s not a matter of if it will cost you, but when and how. 

8 Hidden Costs You May Not Have Thought About

Your business has to run efficiently if it’s going to make money. So when your software isn’t helping you streamline your day-to-day operations, it’s going to affect the way your business runs, and how much money you’re making (or losing).

For instance, have you ever done the math on how much you lose when a loyal customer or productive team member leaves? Have you considered how much it costs to regularly replace your mix-and-match software programs with other software programs in hopes that they’ll be better? Do you know the true cost of the time you and your team spend trying to implement the different types of software? The answer is much more than you’d think.

1. Time

This is quite possibly the largest cost that many people don’t stop to think about. In reality, though, it should be the first cost you think about because it’s going to take the most from you. 

To start, when you know your software isn’t working well for you and your team, there’s usually a conversation about the problem and how to fix it. That conversation is just the beginning, though, and is typically followed by research into potential solutions. 

Once you’ve settled on your new software, you have to purchase it and then implement it. The process of setting up new software takes time, and during setup, you’re likely to face a host of challenges that could take longer than you planned. When your software is finally installed, you have to train yourself — and your team — to use it. All of those steps are important, but they take time and cost you significantly as a result. 

2. Efficiency 

Not all software is built the same. As such, not all software runs the same, and you may find yourself dealing with disruptive software issues more often than you’d like. This is especially true if you’re using multiple, out-of-the-box software programs instead of software integrations or custom software development. 

It’s hard to find one software program that can do everything you need, but having so many programs means you’ll more often notice slowness, crashes, and functionality difficulties. Some software programs have security vulnerabilities, too, which means your business is at risk of being compromised. According to Gartner, the average cost of IT downtime is $5,600 per minute. Most businesses can’t afford this, and they shouldn’t have to; but many do, without even realizing it. 

3. ROI 

When you get new software, you expect a relatively quick and noticeable change in your ROI. After all, software is designed to ultimately help your business be more efficient and profitable. However, installing and learning new software takes time and often disrupts business operations, reducing the overall value of the software.

When your software malfunctions, it reduces your team’s productivity, which in turn reduces your ROI. If it continues long enough, you won’t just notice how ineffective the software is, but how much it’s affecting your team and your business as a result. The initial hopes for positive, fast ROI quickly turn to disappointment and frustration when you realize you’re getting delayed ROI — or worse, no ROI at all. 

4. Outcome

Unless your software is custom-made for you, it’s not going to do everything you want it to do. More often than not, the functionality of poor software cannot meet your needs and expectations, so you don’t get the outcome you were looking for when you decided to buy it. 

Software can’t fix everything. In fact, it can’t even do everything, especially if you opted for out-of-the-box software programs that aren’t designed for your business needs. When you choose pre-packaged software, you’re also choosing undesirable outcomes, like unhappy team members, reduced productivity, mix-ups, disruptions, and — in extreme cases — the downfall of your business. 

5. Goals

Successful businesses have both short-term and long-term goals they plan to achieve. Without the right tools and processes in place, like software that streamlines your workflow, you’re going to have a hard time meeting those goals.

Every business needs resources in place to help them achieve their goals, such as expansion and increased market share; good software is just one of those resources. Without the right software programs, you won’t just have trouble meeting your current goals and business needs, but also setting your business up for future growth and success. 

6. Productivity

Software solutions are supposed to help businesses enhance efficiency, improve speed, and increase revenue. However, software that’s frustrating and difficult to use does the exact opposite, greatly reducing productivity and profits as a result. 

Unless you switch your software, upgrade it, or invest in custom business software, your team will continuously struggle to complete typical daily tasks, and your business will bear the brunt of it. 

7. Opportunity 

When you’re looking for business software, you’re really looking for a better way to work. When you select poor software, not only will you miss out on a better way to work, but you’ll also find yourself facing other consequences that affect your business on a daily basis. For example, your team will likely spend a significant amount of time trying to work around the associated deficiencies, wasting time and losing money. 

The right software will produce opportunities for immense success; poor software will take that opportunity away from you while adding other stressors your team and business shouldn’t have to worry about. 

8. Peace of Mind

In every role, at every business, there are daily concerns and to-dos. When you’re trying to get your business to run more efficiently and effectively, you can’t waste time or money on tools or processes that keep you from doing so. 

Poor software means you have to worry about whether your software is working, your team is able to do their jobs, and if your business can run the way it needs to in order for you to be successful. Good software gives you the peace of mind that comes with knowing your business is running smoothly, like it should, so you can focus on what matters most: a profitable and successful business. 

Upgrade Your Business with Custom Software Solutions

At Dymeng Services, we understand how frustrating software can be for businesses, which is why we provide durable business software solutions that empower you to work smarter and increase efficiency. To avoid the hidden costs associated with poor software choices, we recommend investing in your business with software that will work for you. 

To learn more about custom software, or to find out how our team can help your business run more smoothly, schedule a call with our team today. We’ll help you find the software solution you need so you can get back to business. 

Database Performance for Non-Databasey Devs

I had a junior dev (primarily front-end web) work on a site for me that included a couple calls to the underlying database.  Upon reviewing the work I decided to make a change to the way that some of the DB calls were handled.  Below is the explanation of why I made those changes…

 

So I’ve been bad at staying in touch (as usual) and was supposed to give you a rundown on why some changes to the DB reads for the comments make such a big difference in performance.  Rather than waiting for us both to find time to do something not really required, I figured I’d at least give you a rundown via email…

(note: these apply to pretty much all relational database systems: MySQL, SQL Server, PostgreSQL, SQLite, etc)

A good relational database engine is one of the most impressive components of any computer system I’ve ever seen.  The amount of work that it goes through, potentially millions of rows of information that must be processed, and that it can still get your answer in a fraction of a second is phenomenal, in my book.

A few foundation points to remember when working with data:

  • Always think about where the data is coming from.  When you’re SELECTing from somewhere, think about what tables are involved and the scale of those tables.  Also consider what indexes you’re using, and whether any columns that you’re filtering and/or sorting on are indexed columns.  Whatever’s in the WHERE, ORDER BY or JOIN/ON are the columns the database engine uses to search and define which actual records will be returned/joined, etc.  The rest of the cols in the SELECT list aren’t resolved until *after* the engine has identified the required rows via WHERE/JOIN/ORDER, so the contents of the SELECT list are more or less irrelevant for performance considerations.  Make it a point to have a list of indexes handy for a given database if you’re going to be reading from it a lot and you don’t have someone managing the data access for you.
  • Disk reads are extremely slow.  Memory (cached) reads are extremely fast.  Most databases do not store data in memory (it’s an option, but quite advanced and not relevant for this discussion).  Some databases store all tables in a single file (SQL Server, for example, uses two files: one for the data, one as a transaction log), whereas MySQL stores a separate file for each table (I think…).  Plan on every new SELECT statement having to perform a read from the disk.  It won’t always be the case, but it will be often enough to plan on it.  Disk reads can be like 80% of the time it takes for the entire process: it’s significant.
  • Establishing connections is slow.  If the application/server setup doesn’t support connection pooling (eg, keeping a cache of open connections to the database that can be reused as needed), then each and every call to the database needs a connection to be created.  There are various steps involved in this (and some are bypassed based on certain cached connection stuff depending on the platform/engine), but generally speaking the slow part is that the core DB engine needs to be found, then the authentication handshake needs to be performed (allowing the request into the database itself), then the permission credentials of the requester need to be checked against the objects that are being queried once the request is “inside” the database.  Only then can the database handle the request and return the records (this description is very distilled and the process changes much between engines: MySQL is very rudimentary compared to SQL Server in this respect (SQL Server has an excellent security model, but that also means complexity)).  In rough terms, let’s figure that if a disk read can consume 80% of the round-trip time, this connection establishment and authentication can consume 17% on top of that.
  • Multiple requests are slower than a single request.  If you can get the information in a single SELECT query, you’re typically better off than if you were to use two separate queries (for each query, at runtime, the database engine will generate an execution plan based on the requirements, table sizes, tons of factors really; the point here being that there’s some overhead for each new query).  Avoid it where reasonable (most times you’ll just do what makes sense from an application programming point of view in regards to this point, but it’s worth mentioning anyway).
  • The real power of a database engine shines when it can use set-based processing (as opposed to procedural processing).  This tends to be a stumbling block for most, because almost all other programming (be it procedural or OOP, whatever) is *not* set-based.  Procedural processing in a db is referred to as RBAR: Row By Agonizing Row.  It’s insanely slow compared to set-based processing.  I’ve used a crate of blueberries as an analogy before: if you have a crate of 1000 blueberries and you need to move it from one bench to another, set-based processing is like picking up the crate by the handles and moving it to the other bench.  Non-set-based is like picking up a single blueberry and putting it on the other bench, x1000.  I won’t go into too much detail here about set-based vs. not, but generally speaking if you stick to basic commands/statements in SQL statements, you’re typically set-based.  Procedural processing tends to come in when running stored procedures and calling custom functions.  A full dissertation of this topic is well beyond the scope of this email.
  • Think about the process required for each call to the database you make.  Will it have to read from a disk?  Will it have to authenticate my request and resolve permissions?  Will it have to regenerate multiple execution plans in order to get all of the information I need to ask for?  Am I telling it to discern rows and join on key columns that have indexes (think library Dewey Decimal System), or will it have to look at Every. Single. Row?
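
To make the index point concrete, here’s a small runnable sketch using SQLite (one of the engines mentioned above) and Python’s built-in sqlite3 module.  The table, column, and index names are invented for illustration; the point is that you can ask the engine for its plan and see whether a WHERE clause hits an index or forces a full scan.

```python
import sqlite3

# Hypothetical comments table with an index on post_id (names are made up
# for this sketch; any SQLite database works the same way).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE comments (id INTEGER PRIMARY KEY, post_id INTEGER, body TEXT);
    CREATE INDEX idx_comments_post ON comments (post_id);
""")

# Filtering on the indexed column: the plan reports an index search.
plan_indexed = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM comments WHERE post_id = 42"
).fetchall()
print(plan_indexed[0][3])  # e.g. SEARCH comments USING INDEX idx_comments_post (post_id=?)

# Filtering on an unindexed column: the engine must look at every single row.
plan_scanned = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM comments WHERE body = 'hi'"
).fetchall()
print(plan_scanned[0][3])  # e.g. SCAN comments
```

The exact wording of the plan output varies between SQLite versions, but the “SEARCH … USING INDEX” vs. “SCAN” distinction is what matters.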

Ok, so there’s the basics.  It’s a lot to keep in mind, but more or less just remember that there is a specific set of things that happen: the engine doesn’t just magically return values when you ask for them.  By keeping in mind the general idea of the process and some of those key points, you’ve increased your performance awareness many times over.  Let’s look at some examples.

The task was to come up with a count of comments for each post from within PHP.  A list of the posts was already generated and entered into a loop via PHP so the posts could be echo’d back to the browser, so at some point previously we already had to consult the database to get this information.

  •  Get list of posts from database
  • Enter loop to generate markup for each post

Now, to get the comments, you created a call to the database from within the loop to count the number of comments:

  •  Get list of posts from database
  • Enter loop to generate markup for each post
  • Make another call to the db to get the count of comments

Keeping in mind the list above, we can see that a number of key points have been violated: disk reads, connection establishment and query execution processing are running over and over again for each iteration of the original results.  That’s a TON of overhead, x50 (assuming the post count batch returns 50 records per call), and the page isn’t even starting to display until *after* all that is done (if it were ajax, it’d be a bit less of an impact on the UI end, though still a bit of unnecessary abuse against the database engine (imagine a couple thousand people visiting the site at once, pulling up the main feed, so take that lots of overhead x50 x1000…))

Essentially, by placing a separate database call into the loop, you’ve created a procedural style use of the database instead of a set-based… for each record returned, the DB has to go find other records based on it, but it can’t use its internal processing prowess to do so because these calls are separated at the application level.
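
Here’s a minimal, runnable illustration of that difference using SQLite and Python’s sqlite3 module (the schema and names are invented; the real site used MySQL from PHP, but the principle is identical):

```python
import sqlite3

# Hypothetical posts/comments schema, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE comments (id INTEGER PRIMARY KEY, post_id INTEGER, body TEXT);
    CREATE INDEX idx_comments_post ON comments (post_id);
""")
conn.executemany("INSERT INTO posts (id, title) VALUES (?, ?)",
                 [(1, "first"), (2, "second"), (3, "third")])
conn.executemany("INSERT INTO comments (post_id, body) VALUES (?, ?)",
                 [(1, "a"), (1, "b"), (2, "c")])

# Procedural / "N+1" style: one extra query per post, which is what the
# call inside the loop was doing.
n_plus_one = {}
for (post_id,) in conn.execute("SELECT id FROM posts"):
    (count,) = conn.execute(
        "SELECT COUNT(*) FROM comments WHERE post_id = ?", (post_id,)
    ).fetchone()
    n_plus_one[post_id] = count

# Set-based style: one query, one round trip, same answer.
set_based = dict(conn.execute("""
    SELECT p.id, COUNT(c.id)
    FROM posts AS p
    LEFT JOIN comments AS c ON c.post_id = p.id
    GROUP BY p.id
"""))

print(n_plus_one == set_based)  # True
```

Both approaches produce the same counts, but the first pays the per-query overhead once per post while the second pays it exactly once.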

 

What do we really need to do?  For each post, count the number of comments.

SQL can do that.  So if we add it as part of the original call (from which we’re returning specific posts anyway), we can avoid that x50 (potentially x50 x1000) extra overhead, merge it with an existing call, minimize disk reads, and pull it all through one request, all by adding one join to the comments table and one aggregate (count).  MySQL handles this additional work in fractions of a second longer than it originally took, whereas before, each individual call from within the loop might have taken close to 1 second on its own, meaning that for 50 posts you’d have a long time staring at the screen until the page showed up.  But now we’re able to get *all* of that information (the individual posts as well as the count of comments for each post) in around 1 second or so.

Here’s a generic SQL statement for getting a count of a related table:

SELECT parent.ID, COUNT(child.ID) FROM parent INNER JOIN child ON parent.ID = child.ParentID GROUP BY parent.ID;

Note that only two tables are present: parent and child.  Further note that the join columns (the key ones used to discern how records will be further processed and ultimately returned) are ID columns that are likely to be indexed.  There’s no fluff: no unnecessary tables, no unneeded joins, no filtering/sorting/joining on non-indexed columns.
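
Here’s that generic statement made runnable against a toy SQLite database.  One thing worth noticing: with an INNER JOIN, a parent with zero children drops out of the results entirely, whereas a LEFT JOIN keeps it with a count of 0 (which is usually what a “comments per post” display wants).

```python
import sqlite3

# Toy parent/child tables matching the generic statement above (SQLite).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE parent (ID INTEGER PRIMARY KEY);
    CREATE TABLE child (ID INTEGER PRIMARY KEY, ParentID INTEGER);
""")
conn.executemany("INSERT INTO parent (ID) VALUES (?)", [(1,), (2,)])
conn.executemany("INSERT INTO child (ParentID) VALUES (?)", [(1,), (1,)])

# INNER JOIN: parent 2 has no children, so it vanishes from the results.
inner = conn.execute(
    "SELECT parent.ID, COUNT(child.ID) FROM parent "
    "INNER JOIN child ON parent.ID = child.ParentID GROUP BY parent.ID"
).fetchall()
print(inner)  # [(1, 2)]

# LEFT JOIN: parent 2 survives with a count of 0.
left = sorted(conn.execute(
    "SELECT parent.ID, COUNT(child.ID) FROM parent "
    "LEFT JOIN child ON parent.ID = child.ParentID GROUP BY parent.ID"
).fetchall())
print(left)  # [(1, 2), (2, 0)]
```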

Let’s take a look at the call that was set up to run from each iteration of the post generation loop:

function getPostComments($postID, $categoryID) {
    // not used anymore; included the comment count as an aggregate of the
    // main feed item query rather than calling a new database read for each item
    $html = "";
    $db = dymeng\DatabaseFactory::GetDatabase();
    $result = $db->Select("
        SELECT c.id,
               c.author_id,
               u.user_display_name,
               u.school_id,
               u.school_name,
               c.comment_content,
               c.comment_date
        FROM content_comments AS c
        INNER JOIN (
            SELECT t.id,
                   t.user_display_name,
                   ts.id AS school_id,
                   ts.school_name
            FROM users AS t
            INNER JOIN schools AS ts ON t.user_school_id = ts.id
        ) AS u ON u.id = c.author_id
        WHERE c.content_id = ?
            AND c.content_category = ?
            AND c.comment_status = 1
        ORDER BY c.comment_date;",
        $postID, $categoryID
    );
    if (!$result) {
        // no comments
        $html = "0";
    } else {
        $html .= count($result);
    }
    return $html;
}

Ok, it works, but aside from the “inside loop” problem, there are a lot of fields and tables in there that simply aren’t required to get the information needed.  While JOINs can be pretty quick, they’re not free by any means.  (Really, all that’s required is the comments table and the id/category, which we have as part of the original call anyway, so we didn’t need any joins, or even any columns other than our generated count.)

 

Let’s take a look at what the original “get the posts” request was, with the new functionality added for the post count handling:

        SELECT
            fi.id,
            fi.feeditem_category,
            fi.feeditem_school_id,
            fi.feeditem_entry_date,
            fi.feeditem_username,
            fi.feeditem_user_display_name,
            fi.feeditem_title,
            fi.feeditem_content,
            COUNT(cc.id) AS comment_count
        FROM content_feed_items AS fi
        LEFT JOIN content_comments AS cc
            ON cc.content_id = fi.id AND cc.content_category = fi.feeditem_category
        WHERE feeditem_status = 1
            $categoryClause
            $schoolClause
        GROUP BY
            fi.id,
            fi.feeditem_category,
            fi.feeditem_school_id,
            fi.feeditem_entry_date,
            fi.feeditem_username,
            fi.feeditem_user_display_name,
            fi.feeditem_title,
            fi.feeditem_content
        ORDER BY fi.feeditem_entry_date DESC
        LIMIT " . ((int)$startPage * (int)$pageSize) . ", " . (int)$pageSize;

So, above I’ve added the LEFT JOIN on the comments table and the join columns, both of which happen to be indexed.  Furthermore, in order to return non-duplicates of the posts and get the aggregate (count), I added the GROUP BY clause.

Now, one might look at this and think “omg that’s going to be so slow because now it needs to group all those records and fields together because we added an aggregate and it didn’t have to before and now the SQL statement is much more complex than it would have been just to count the comments separately from inside the loop!”   True, it looks more complex, and true, it does need to group everything now where it didn’t have to before, but for the most part that’s piddly stuff to a database engine: it’s still operating set based (so grouping is uber-fast), and it’s still just one call, one disk read, one connection and one execution plan.

As a perspective reminder: we were working with what’s essentially test data and 15-20 posts.  The full page generation time after my modification is somewhere between 1-2 seconds.  Before my modification it was somewhere between 6-8 seconds, and it would have gotten worse as more posts were returned (and would have put a significant load on the server, which would have bottlenecked even more if actual users were making requests en masse).

And that’s why I made some minor adjustments to how the comment counts were being generated 🙂

I think that’ll give you some new light to see by while you’re making requests of databases.  I’m very much a db guy myself and don’t expect general application programmers (*especially* FE programmers) to have quite the depth of understanding that I do, but if you walk away from this with the general idea and a few key concepts, that can make a world of difference and then all that typing was worth something!

Cheers,

(P.S.: I’m going to turn this into a blog post…)