From a review of the rather powerful Fair Isaac Blaze Advisor, which will surely be far less successful than its functionality deserves:
But employing a usability expert when designing the tools and observing how users interact with them would go a long way toward improving their usefulness.
My mind utterly boggles each time I discover that a large software vendor still doesn’t seem to have realized this. Or maybe Fair Isaac did do usability engineering, but entrusted it to a blithering incompetent. That frankly would be more reassuring than them not having tried at all.
“Decision support”, “information centers”, “business intelligence”, “analytic technology”, and “information services” have been around, in one form or other, for 35+ years. For most of that time, there have been two fundamental ways to sell, market, and position them:
- Access to information
- Application software
More recently – especially the past five years – there’s been a third way:
- Infrastructure upgrade
as early-generation implementations get replaced by newer ones.
At the 50,000-foot level, here’s some of what I see going on:
- Classical BI marketing is floundering. BI vendors don’t know whether they’re in the business of quick/easy information access, analytic apps, or better enterprise system software.
- A few areas of analytic application are being packaged and marketed well, with solid business-process stories and good customer acceptance of same. The biggies are budgeting/planning and CRM analytics. On the whole, however, analytic apps are floundering, or else are little more than reporting front-ends on operational systems (e.g., in network management).
- Data warehouse software startups are on a roll. Especially at the high end, this is a pure infrastructure-upgrade business. There’s still plenty of room for improvement, but multiple vendors are each doing a good job of marketing on the basis of:
- Speeds and feeds
- Ease of deployment
- Ease of administration
- Data integration is mainly an infrastructure improvement play. After all, that integration COULD be hand-coded. Automating the process is usually a better-infrastructure story.
- Text search is still an information-access story. There are multiple niches where search is booming. But in all cases the story is information access. Evidently the technology and/or market aren’t mature enough yet for strong infrastructure stories. And in the limited cases where text search gets integrated into general application software packages, it’s usually just for information access rather than a real business process.
- Data mining and predictive analytics are mainly information access plays. Yes, the information being accessed is calculated rather than raw. Yes, I believe that the heart of the data mining market is continuous process improvement. Even so, what users buy from the vendors is usually little more than information toolkits.
- Text analytics is mainly an information access play. Text mining and information extraction have two main uses right now. Either they resemble – and indeed often feed into – data mining, or else they are used to enhance search and search-like document access.
- Information services have always been an information access play. When you think about it, the financial-quote-machine business is a huge part of the whole decision support market. Lexis/Nexis is no slouch either. And they’re all about providing information access.
- This three-headed taxonomy of strategies is similar to one I previously postulated for Microsoft, SAP, IBM, and Oracle.
- I covered analytic business processes at length in a November 2004 white paper. Unfortunately, industry progress since then has been relatively slow.
- I’ve written voluminously about data warehouse software startups on DBMS2.
- One example of infrastructure focus is the ease-of-deployment trend.
- Web search and generic enterprise search aren’t the only search areas to focus on information access. (And yes, they’re most definitely separate areas.) Even customer-facing structured search does; the information is just tailored according to different criteria.
Business intelligence (BI) used to be characterized by speed and cost-effectiveness — short sales cycles, low-cost departmental purchases and deployments, evasion of IT departments’ strangleholds on data, and so on and so forth. That focus has blurred, as BI vendors have increasingly pursued analytic applications or enterprise-wide standardization sales. But increasingly I’m seeing signs that the pendulum has swung at least partway back. For example:
- Business Objects and Netezza have announced a mid-range BI appliance.
- Ingres is headed in the same direction.
- QlikTech is enjoying great growth for its fast-deploying BI technology.
- KXEN and Verix offer “easy” data mining technology.
- Search-based BI is trying to circumvent the data warehouse deployment process.
It’s about time.
It is becoming ever clearer that dashboards aren’t working out too well, any more than predecessor technologies like EIS (Executive Information Systems) did. The recurring problem with these technologies is that if they’re mind-numbingly simple, people don’t find them very useful; but if they’re not, people are overwhelmed and still don’t find them useful. This column by Sandra Gittlen does a good job of spelling the problem out.
I think there are lots of problems like that in BI, and what we need to do is step back and consider all the different kinds of BI that enterprises value and need. More precisely, let’s consider the major kinds of use of BI, because it seems that each calls for different kinds of technological support. Here’s one possible list:
- Early warning of situations that require action.
- Communication of company results.
- Deep analysis and decision support.
- Operational analytics.
Here’s what I mean by each category.
Data mining is hugely important, but it does have issues with accessibility. The traditional model of data mining goes something like this:
- Data is assembled in a data warehouse from transactional information, with all the effort and expense that requires. Maybe more data is even deliberately gathered. Or maybe the data is in large part acquired, at moderate cost, from third-party providers like credit bureaus.
- The database experts fire up long-running, expensive data extraction processes to select data for analysis. Often, special data warehousing technology is used just for that purpose.
- The statistical experts pound away at the data in their dungeons, torturing it until it reveals its secrets.
- The results are made available to business operating units, both as reports and in the form of executable models.
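The traditional model above can be sketched in miniature. This is purely an illustration of the pipeline shape — the toy warehouse table, the column names, and the trivial thresholding "model" are all hypothetical stand-ins, not any vendor's actual workflow — but it shows the four stages: assemble data, run the extraction, derive a model, and hand the business units both a report and something executable.

```python
# A minimal, stdlib-only sketch of the traditional data mining pipeline.
# Everything here (table, columns, scoring rule) is a hypothetical illustration.
import sqlite3
import statistics

# 1. Assemble transactional data in a (toy) data warehouse.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE transactions (customer_id INTEGER, amount REAL, flagged INTEGER)"
)
conn.executemany(
    "INSERT INTO transactions VALUES (?, ?, ?)",
    [(1, 50.0, 0), (1, 60.0, 0), (2, 900.0, 1), (2, 40.0, 0), (3, 45.0, 0)],
)

# 2. The database experts run the (here, instantaneous) extraction,
#    producing a per-customer analysis set.
rows = conn.execute(
    "SELECT customer_id, AVG(amount), MAX(flagged) "
    "FROM transactions GROUP BY customer_id"
).fetchall()

# 3. The statistical experts derive a model from the extract -- here, a
#    trivial one: flag any customer whose average spend exceeds the mean
#    of all customers' averages.
threshold = statistics.mean(avg for _, avg, _ in rows)

def score(avg_amount: float) -> int:
    """The executable model handed back to the business units."""
    return 1 if avg_amount > threshold else 0

# 4. Publish the results: a report, plus the executable model itself.
report = {cid: score(avg) for cid, avg, _ in rows}
print(report)  # {1: 0, 2: 1, 3: 0}
```

In a real deployment each stage is, of course, a major project in its own right — which is exactly the accessibility problem the traditional model creates.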
Data mining requires and justifies huge investments. The smallest part is the data mining software itself. A much bigger part is the investment in data warehouse technology, a subject about which I’ve been posting extensively of late on DBMS2. But there’s yet another part to the picture, namely the investment in actually gathering data for analysis, which I’ve written about most recently in a blog post elsewhere that I’m now copying below.
As previously noted, I have a Computerworld column coming out next week on data mining. The heart of the column is an enumeration of markets where data mining applications are having genuine success. Before I sat down to actually write the column, my list went something like this:
- There’s a large set of “early warning” apps where text mining is being deployed. Many of those same apps are addressed by data mining of tabular data too – antifraud, to start with, and also warranty tracking and indeed most of the rest.
- Data mining has been huge in CRM.
- The use of data mining in manufacturing to do failure analysis, improve quality, etc. is really on the rise. This goes at least somewhat beyond what one could reasonably pigeonhole as “early warning.”
- Data mining plays a big role in the life sciences, and is being applied to a broad range of other sciences as well.
- Data mining is a huge part of R&D at search engine and antispam vendors.
My September Computerworld column (I’ll post a link, but no sooner than September 11) is about data mining. As promised in that column, here are some links and guides to further work on the subject.
- I have posted extensively on text mining over on the Text Technologies blog.
- In particular, much of the column was based on a post in which I discussed “early warning” applications of text mining.
- The research was informed by a trip to the KDD 2006 conference, about which I’ve blogged separately.
- SAS is the world’s biggest vendor of this stuff, so if you want to know what the applications are, you might want to start with their website.
I went to the KDD 2006 (Knowledge Discovery in Databases) conference in Philadelphia last week. It was an interesting, if weird, experience. The conference had been billed to me as the place where all the world’s great data mining/KDD experts gather. This turns out to have been old news; the conference has apparently fallen off somewhat over the past 2-3 years. What’s left is an academic conference and a small trade show that seem only loosely coupled. Here’s what I experienced at each.