Monthly Archives: February 2015

Beware the Spreadsheets from H*ll…

As IT consultants, we are constantly looking at organizations and their IT business processes.

Anytime I see Excel spreadsheets, a shiver goes up my spine.

Think of the time that someone compiled data from “this report and that report” and made an Excel spreadsheet.  This happens all the time, in every organization.  HOWEVER, how did the creator of this report validate the data used?  Were the calculations done correctly?  And, more importantly, once this report is published and requested on a recurring basis – say, monthly – whose responsibility is it to maintain this manually created report?

Although spreadsheets allow an individual user to create their own data models, estimates, budgets, and tracking forms, they can also be the source of catastrophic business failures.

I can remember one of my first IT jobs, where I was in charge of tracking revenue data and creating reports.  Excel was my boon and my bust.  I created massively complex and comprehensive spreadsheets that allowed me to cost-analyze the whole business model.  But one error in one formula, accidentally copied to the wrong range of cells, threw the whole model off kilter – though not enough to be glaring and obvious.

The error was ingrained in the model for months, and it wasn’t until a new controller came on board that we identified the calculation error.


Spreadsheets are great for one-time data modeling and calculating.  Even for doing a “what-if” projection for your backyard landscaping project.

But once a spreadsheet is introduced into the “business” process, especially anywhere in the cost-accounting workflow, you are opening your business up to major self-inflicted wounds and catastrophic failures.

All efforts should be made to eliminate, or at least tightly control, the following data sources in your organization’s business processes:

1) ANYTHING HARD-COPY – if your business deals with hard copy within your cost-accounting and business processes, care must be taken to standardize and control the conversion from hard copy to source electronic data.  Meaning, if someone has to key in data from hard copy, controls must be in place for user authorization, data access, and data constraints.

2) ANY USE OF SPREADSHEETS – if your organization is using any type of spreadsheet for mission-critical or business-critical data, then you have NO ASSURANCE OF DATA INTEGRITY.  No matter how much you lock down, password-protect, and idiot-proof your spreadsheet, someone smart still has to convert the data into your business application, and that’s where the errors can creep in.  Think about how often you use the UNDO key when you are spreadsheeting… One too many Ctrl+Z, and you’ve recalculated and undone formulas who knows where else in your spreadsheet!

3) ANY USE OF VERBAL COMMUNICATIONS – if anyone in your organization enters data into your business applications directly from a verbal command or order, the application must have maximum data-selection controls and data-type constraints.  AND your system should also capture a secondary, verifiable record that you can go back to as a cross-check.  For instance, if orders come in verbally over the phone, a tally sheet should be used to log, time-stamp, and archive each entry (see the sketch below).
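
To make the idea of controlled data entry concrete, here is a minimal sketch in Python of what the controls in item 3 might look like – a verbally received order validated against data-selection constraints and written to a time-stamped, archivable log.  The product codes, quantity limit, and function name are all invented for illustration; the same pattern applies to data keyed in from hard copy or imported from a spreadsheet.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical data-selection controls: anything outside these sets/ranges
# is rejected before it ever reaches the business application.
VALID_PRODUCTS = {"WIDGET-A", "WIDGET-B", "WIDGET-C"}   # invented codes
MAX_QUANTITY = 500                                      # invented limit

@dataclass
class OrderEntry:
    taken_by: str       # authorized user keying in the verbal order
    product_code: str
    quantity: int
    timestamp: str      # UTC time-stamp for the archived tally sheet

def log_verbal_order(taken_by, product_code, quantity, audit_log):
    """Validate a verbally received order and append it to a time-stamped log."""
    if product_code not in VALID_PRODUCTS:
        raise ValueError(f"unknown product code: {product_code}")
    if not isinstance(quantity, int) or not 0 < quantity <= MAX_QUANTITY:
        raise ValueError(f"quantity out of range: {quantity}")
    entry = OrderEntry(
        taken_by=taken_by,
        product_code=product_code,
        quantity=quantity,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    audit_log.append(entry)   # the secondary, verifiable record you can go back to
    return entry

# Usage: the audit log doubles as the archived tally sheet
audit_log = []
log_verbal_order("jsmith", "WIDGET-B", 25, audit_log)
```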

Don’t let your spreadsheets be the source of your own spreadsheet h*ll!

Tech platforms go full circle…here we go again!

I’m always amazed at the way technology systems and architectures seem to go full circle – remember centralized computing, then client-server, then data center virtualization?  In that progression, we went from centralized mainframe computing to distributed client/server models, then back to centralization at the data center.


Do you know what the old-school mainframe MVS operating system name stands for?  Multiple Virtual Storage… That’s right, IBM was virtualizing its mainframes back in the late 60’s and early 70’s.  That was the first computing platform able to separate the workload from the computing hardware using a software abstraction layer – creating multiple virtual address spaces, logically independent of the central processing hardware.

This time we’re going from the old dedicated, stand-alone server, with its dedicated CPU, memory, and hard drives, to chassis-based computing with SANs and NAS, and now right back to dedicated compute and storage.  And some organizations haven’t even purchased their first SAN (or NAS) yet.

Get ready for Hyper-convergence.

It’s important to note that technological advances and systems complexity enable these new models and architectures.  Even though they harken back to past architectures, they are fundamentally different because of those advancements.

Because of the complexity of chassis- and appliance-based computing – switching fabrics, the myriad of SAN storage options, flash (“static”) storage, umpteen spinning-disk options, and the work of provisioning this massive array back to processors on the other side of the fabric – a new architecture is now being embraced as a more scalable and efficient platform.

Hyper-convergence is the integration of multi-core x86 processors directly with memory, flash, and HD-based storage, creating a black box with dedicated computing and storage resources.  By integrating compute and storage, the dedicated compute-to-storage relationship effectively eliminates the fabric networking component – the main bottleneck in SAN-based compute/storage models.

This new appliance-based computing and storage model allows very high-density computing to be scaled directly with its own dedicated storage.  Compute and storage capacity scales simply by adding more black boxes.

That doesn’t mean that there are no provisioning or scaling issues, just that some of the complexity and costs associated with these large-scale SANs and the switch-fabric bottlenecks are eased.

And because of processor and storage density, we’re talking about compute and storage appliances with up to twenty x86 processor cores, 1.6TB of flash (“static”) storage, and another 16TB of spinning disk – all in one 2U (two rack units) box.  Of course, these numbers will continue to spiral upward.  AND they’re not cheap.  But what is the cost of reducing complexity?  When it means eliminating bottlenecks and single points of failure (think switch fabric), the cost is much easier to justify.
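
To put those per-box numbers in perspective, here is a back-of-the-envelope sketch in Python of how capacity scales linearly as appliances are added.  The per-node figures are the ones quoted above; the 21-node, 42U rack is an illustrative assumption and ignores space for top-of-rack switching.

```python
# Per-appliance figures quoted above.
CORES_PER_NODE = 20       # x86 processor cores
FLASH_TB_PER_NODE = 1.6   # flash ("static") storage, TB
DISK_TB_PER_NODE = 16     # spinning disk, TB
RACK_UNITS_PER_NODE = 2   # 2U appliance

def cluster_capacity(nodes: int) -> dict:
    """Aggregate compute and storage for a cluster of identical appliances."""
    return {
        "cores": nodes * CORES_PER_NODE,
        "flash_tb": nodes * FLASH_TB_PER_NODE,
        "disk_tb": nodes * DISK_TB_PER_NODE,
        "rack_units": nodes * RACK_UNITS_PER_NODE,
    }

# Scaling is linear: 21 such boxes fill a 42U rack with
# 420 cores, ~33.6 TB of flash, and 336 TB of spinning disk.
caps = cluster_capacity(21)
print(f"{caps['cores']} cores, {caps['flash_tb']:.1f} TB flash, "
      f"{caps['disk_tb']} TB disk in {caps['rack_units']}U")
```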

So, before upgrading your SAN platform, better see if Hyper-convergence is part of your future.