When scoping a project for feasibility, there are usually two approaches you can take: either jump straight in with a “sounds feasible” gut call, or take some time to prototype and gather genuinely accurate data.
Both have their pros and cons: with the former you can learn as you go, but your guesses often unravel as a clearer picture of the problem emerges, while the latter gives you better results at the cost of some up-front time.
Is there a middle ground that allows you to give answers to feasibility questions off the cuff?
Getting to “no”, fast
I have found that by remembering a few numbers you can quickly come up with an answer to the feasibility question.
- one thousand seconds is about 17 minutes (round up to 20 minutes, or a third of an hour, for easier math)
- one million seconds is about 11.6 days (or round to 12 days)
- one billion seconds is about 31 years, 256 days (at this scale, you can round up to 32 years)
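These conversions are easy to sanity-check in code. Here is a minimal sketch (the function name and rounding choices are my own, not part of any standard library) that turns a raw second count into the rounded figures above:

```python
def human_duration(seconds: float) -> str:
    """Convert a second count into the rough units used by the rules of thumb."""
    minutes = seconds / 60
    days = seconds / 86_400        # 60 s * 60 min * 24 h
    years = days / 365.25
    if years >= 1:
        return f"~{years:.1f} years"
    if days >= 1:
        return f"~{days:.1f} days"
    return f"~{minutes:.0f} minutes"
```

For example, `human_duration(1_000_000)` comes out at roughly 11.6 days, matching the second rule of thumb.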
By doing some simple multiplication in your head, you can quickly sanity-check requests from your clients and give a swift answer as to whether something is doable, impossible, or in need of more thorough consideration.
The idea behind using these rules of thumb is to be able to tell on the spot if an idea is completely infeasible or not, without having to invest time in a super-accurate estimation early on.
Let’s do a quick case study. You (yes, you in the back there) are a partner in a small startup. Your company’s core product is a clever little website-monitoring SaaS application that checks a site’s status and sends automated warnings to the clients if it finds anything amiss on their sites. You’re the head of tech, programming and implementation; your buddy (and boss) is great at business management but not so savvy on the technical nitty-gritty. Here are some situations you might run into.
Small Task (Contacting Customers)
Your burgeoning startup has finally hit 1,000 customers. Congratulations! Your boss wants to send a personalised thank-you email to each one. Can you do it within a week?
For each one, if you’re quick to find their email address and jot out a short message, you can perhaps manage a minute per customer. We know 1,000 seconds is about 17 minutes, so 1,000 minutes is about 17 hours. So yes: if all you did was write messages for a little over two full work days, you could get it done.
If that doesn’t sound like fun, then perhaps you could consider parallelising the operation by giving some of the work to co-workers. (This is why companies hire interns.) Or maybe it’s time to write a script to automate the messages. They won’t be as personal, but it would save you quite a bit of time.
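The arithmetic behind that estimate can be written out explicitly. A quick sketch, with the assumed figures (1,000 customers, one minute per email, 8-hour work days) called out as variables:

```python
# Back-of-envelope check for the thank-you-email task.
customers = 1_000
minutes_per_email = 1      # assumed: address lookup plus a short message
work_hours_per_day = 8

total_hours = customers * minutes_per_email / 60   # ~17 hours of writing
work_days = total_hours / work_hours_per_day       # a little over two work days
```

Change `minutes_per_email` to see how sensitive the answer is: at three minutes per email the task balloons to more than six work days, which is when the intern or the script starts to look attractive.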
Medium Task (Website Monitoring)
Your website-monitoring SaaS app is taking off, and you now have a million websites that your customers want monitored. Is it feasible to monitor each site once an hour?
It might take a second or more to check a site’s status, so we can quickly say that it would not be feasible to check each site within an hour. Running the checks sequentially would take a million seconds, or 12 days. Even if each site were quick and returned data in half a second, that’s still six days.
At this point, if you wanted to offer hourly checks, you might decide it’s time to start offering different customer service tiers, with some paying more for higher SLA and more frequent checks.
Alternatively, look at other methods. By our rules of thumb, parallelising the requests to check 1,000 sites per second (perhaps by using multiple servers) could see the task run in about 20 minutes.
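A sketch of the throughput arithmetic for this scenario, assuming one second per sequential check and a combined rate of 1,000 checks per second spread across servers:

```python
# Can one million sites be checked within an hour?
sites = 1_000_000
seconds_per_check = 1.0    # assumed time for one status check

sequential_seconds = sites * seconds_per_check   # ~12 days: clearly no good
checks_per_second = 1_000                        # assumed combined parallel rate
parallel_seconds = sites / checks_per_second     # 1,000 s, i.e. about 17 minutes
fits_in_an_hour = parallel_seconds <= 3_600
```

The point of the sketch is the ratio, not the exact numbers: you need roughly a 300-fold speed-up over sequential checking before hourly monitoring fits in the hour.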
Large Task (Log Analysis)
By now, your company has it pretty much made. You’ve been running a long time; you’ve managed to collect 1,000 log messages per customer site, or about a billion in total. Is it feasible to aggregate the information to find out average page-load times, and can you get it done by tomorrow?
You can probably guess where this is going – can we process that many records in time? If it takes a second per record, then maybe the next generation of programmers can look at the results.
With a more optimised script, you may be able to process 1,000 logs per second, and then we’re back to a 12-day wait. In any case, it’s unlikely you’ll get a result by tomorrow.
At the “billions” scale, I believe it is worth investing some time to figure out how long a process is likely to take, as going from 2 milliseconds to 1 millisecond over a billion records represents a saving of 11.6 days. (Hey, that number sounds familiar!) This number only gets bigger, though; after all, you won’t be running that process only once, right? Think about how many years of processing time can be saved by taking one single day to find a better answer.
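That saving is easy to verify. A minimal sketch, using the assumed figures from the paragraph above (one billion records, a one-millisecond improvement per record):

```python
# Savings from shaving one millisecond per record at the billion scale.
records = 1_000_000_000
saving_per_record_s = 0.001          # 2 ms down to 1 ms

saving_days = records * saving_per_record_s / 86_400   # ~11.6 days per run
```

Multiply `saving_days` by however many times the job runs per year and the case for a day of optimisation work makes itself.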
Faster per-record processing using a different language or datastore, along with parallelisation, can yield immense results when multiplied out.
At Tera Shift, we want to get you to “yes”. But sometimes we have to say “no”, if it’s just not physically or technologically possible to get the task done fast enough.
However, we will work with you to suggest ways in which you can speed things up and get to that “yes”. Even if that means you may not be able to do a billion things per second.