The biggest problem created by the explosion of data is that the meaningful aspects are becoming harder to find, buried within the sheer volume of data available. In some ways this reflects a phenomenon most of us will be familiar with: internet search. Whilst internet search has increased radically in sophistication, many would argue that it is harder to find relevant information now than it was 15-20 years ago, when search was in its infancy. This is because, for every definitive source of information, there are now a thousand semi-relevant sources alongside it.
The design of a solution enabling Data Visualisation is best served by a toolkit of specialist tools rather than a single purchased product. This is because there are many pre-processing tasks where specialist tools can improve the relevance of the initial data set, as well as tools being needed to make the data available in the first place. There is no one-size-fits-all product in this market, and indeed many would separate the acquisition and pre-processing of data from the visualisation itself, even though the two are intrinsically linked.
The big architectural challenge in this area is arranging the right tooling to solve the real problem rather than the perceived problem. A combination of tools may be required to deliver the necessary outcome, and their integration needs to be well planned from a technology perspective. Information architecture is absolutely key to the problem space: different sources delivering the same types of data must be presented in the same way, as this is paramount to the downstream visualisation (i.e. can the data be structured to enable visualisation?). Whilst a simple solution might yield good results within a narrow area of interest, as the solution inevitably evolves to cover a wider scope it can easily run into fundamental flaws that prevent that evolution. There are techniques that support incremental development of the underlying information architecture, and these are highly likely to be required for initial implementations. At the coal-face of providing data with the potential to be visualised, this is the only way to deliver a solution incrementally; a big-bang approach does not seem feasible in this area.
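To make the information-architecture point concrete, here is a minimal sketch of normalising two hypothetical sources that deliver the same type of data in different shapes into one common record. The source systems, field names, and units are illustrative assumptions, not drawn from any real estate.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical common record: every source is mapped into this shape
# before the visualisation layer ever sees the data.
@dataclass
class SalesRecord:
    day: date
    region: str
    amount: float  # pounds

def from_crm(row: dict) -> SalesRecord:
    # This assumed source reports ISO date strings and amounts in pence.
    return SalesRecord(
        day=date.fromisoformat(row["date"]),
        region=row["territory"].upper(),
        amount=row["amount_pence"] / 100,
    )

def from_billing(row: dict) -> SalesRecord:
    # This assumed source reports (year, month, day) tuples and pounds.
    y, m, d = row["ymd"]
    return SalesRecord(day=date(y, m, d), region=row["region"], amount=row["total"])

records = [
    from_crm({"date": "2024-03-01", "territory": "north", "amount_pence": 125000}),
    from_billing({"ymd": (2024, 3, 1), "region": "NORTH", "total": 980.0}),
]
```

Downstream visualisation code then depends only on `SalesRecord`, so a new source means one new adapter rather than a change to every view.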
There are also complex considerations around the volume and density of the data that should be provided for visualisation; this is essentially a form of pre-analysis. Sometimes there are existing tools in an organisation's estate that can fulfil this function, even if they have not previously been used for this specific purpose.
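As one illustration of this pre-analysis step, a high-volume event stream can be collapsed into summary counts before it reaches the visualisation layer. The event shape and category names below are assumptions for the sketch.

```python
from collections import defaultdict

def aggregate(events: list[dict]) -> dict:
    """Collapse raw events into per-category counts, so the
    visualisation receives a dataset of manageable density
    rather than every individual event."""
    counts: dict = defaultdict(int)
    for event in events:
        counts[event["category"]] += 1
    return dict(counts)

# Hypothetical raw feed: thousands of these in practice, three here.
raw = [{"category": "error"}, {"category": "ok"}, {"category": "error"}]
summary = aggregate(raw)
```

The same idea scales up to windowed or bucketed aggregation; the essential point is that density reduction happens before, and independently of, the chosen visualisation tool.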
The visualisation technology itself is all about turning raw data into a visual output from which meaning can be derived. This is the forte of tooling, but there are many tools on the market, and different tools are better at different types of visualisation.
Whenever considering Data Visualisation, the focus has to be on presenting information in the way that is most beneficial to the consumer of that information. That means different user groups (and potentially different demographics within those groups) will benefit from different visualisations. The art in visualisation is as much about identifying the types of view, and the tooling to support them, that will be most beneficial to users as it is about the visualisation itself. The problem is compounded by the fact that this area is immature, so a user may never have seen a representation that matches what they would actually want to see. The architectural aspect here is, in effect, separation of model and view: providing an architecture that allows a suite of views, potentially drawn from a suite of tools. This then creates the end-to-end solution.
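The model/view separation described above can be sketched as one shared data model rendered by interchangeable views. The class names and the trivial text renderings here are illustrative assumptions; in a real solution each view might be backed by a different visualisation tool.

```python
from typing import Protocol

class View(Protocol):
    # Any view only needs to know how to render the shared model.
    def render(self, model: dict) -> str: ...

class TableView:
    # A detail-oriented view: one row per series.
    def render(self, model: dict) -> str:
        return "\n".join(f"{k}\t{v}" for k, v in model.items())

class SummaryView:
    # A glance-oriented view for a different audience.
    def render(self, model: dict) -> str:
        return f"{len(model)} series, total {sum(model.values())}"

# One model, many views: adding a view never touches the model.
model = {"north": 12, "south": 7}
views: list[View] = [TableView(), SummaryView()]
outputs = [v.render(model) for v in views]
```

Because each view depends only on the model's shape, views suited to different user groups can be added, swapped, or sourced from different tools without disturbing the data side of the solution.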