Imagine if the data you store in the cloud were searchable, easy to refine, and simple to organise, visualise and analyse. You could share your data in easy-to-understand visualisations, and use it to understand both what is going on now and what has gone on before.
Consolidate your data silos and add data from other sources. Work with all your data, big and small, blended together in one place, and combine it with the information held in your own internal systems. We give you a set of easy-to-use tools to analyse this information: explore your data, produce reports, and build meaningful visualisations and dashboards to support your decisions.
Many cloud solutions do not go beyond dumb file systems. We provide a system that is aware of what it is storing. A combination of assisted indexing, automated classification and learning processes ensures that the data is doing what you want it to do. The system also reports on itself, telling you what is happening to your data. Through indexing and metadata the information is searchable but remains secure.
Your data can be examined at the point of capture and classified into categories. Pattern searches and analysis allow predictions to be made. The more you use your data, the easier it becomes to use, because the system learns from you. It learns what you want to do. It learns how you ask questions of your data. It learns to predict what answers you need, when you need them, and what you want to do with those answers.
The first step is to locate the data and add it to the system. The data could be a mixture of filesystem and direct input sources (e.g. images, text from email, documents, online sources or data feeds). The process automatically adds meaningful metadata derived from: the capture method, the capture process, the source, the location, and additional user input such as assigned labels.
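This capture step might be sketched in Python as follows. The `CapturedItem` record and `capture` helper are purely illustrative names, not part of any actual XCiPi API; they simply show the kind of metadata the text describes being attached at the point of capture.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class CapturedItem:
    """One piece of captured data plus the metadata recorded with it."""
    content: str
    metadata: dict = field(default_factory=dict)

def capture(content, source, method, location, labels=()):
    """Record an item and attach metadata describing how it was captured."""
    return CapturedItem(
        content=content,
        metadata={
            "capture_method": method,      # e.g. "email", "feed", "upload"
            "source": source,              # where the data came from
            "location": location,          # where it was captured
            "captured_at": datetime.now(timezone.utc).isoformat(),
            "labels": list(labels),        # user-assigned labels
        },
    )

item = capture("Invoice #1042 ...", source="accounts@example.com",
               method="email", location="London", labels=["invoice"])
print(item.metadata["capture_method"])   # email
```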
The second step is to extract just the data that is meaningful and drop the rest. The process searches through the initial data captured in stage one and finds data that is, or ought to be, related. The searches can be run many times, refining the data along the way. Deep indexes, classifications and metadata are automatically generated and stored.
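A minimal sketch of this refinement step, assuming a simple list of records with user labels: each pass keeps only the records that match and drops the rest, and the passes can be repeated as often as needed.

```python
# Sample captured records; the fields are illustrative only.
records = [
    {"text": "Invoice #1042 due March", "labels": ["invoice"]},
    {"text": "Lunch menu for Friday",   "labels": []},
    {"text": "Invoice #1043 overdue",   "labels": ["invoice"]},
]

def refine(items, predicate):
    """Keep only the items the predicate accepts; run as often as needed."""
    return [item for item in items if predicate(item)]

# First pass: keep anything labelled as an invoice.
invoices = refine(records, lambda r: "invoice" in r["labels"])
# Second pass: narrow to the overdue ones.
overdue = refine(invoices, lambda r: "overdue" in r["text"])
print(len(invoices), len(overdue))   # 2 1
```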
The third step is to define the relationships and perform searches that can process the data sets and present clear, meaningful results. This would mainly be a combination of simple plain-text searches and easy-to-combine functions, such as adding, subtracting, multiplying and dividing values within defined ranges, and creating total, average, maximum and minimum summaries.
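The third step can be sketched in plain Python, assuming hypothetical sales rows: restrict the values to a defined range, then build the total, average, maximum and minimum summaries named above.

```python
# Illustrative data: monthly sales values.
sales = [
    {"month": 1, "amount": 120.0},
    {"month": 2, "amount": 80.0},
    {"month": 3, "amount": 200.0},
    {"month": 7, "amount": 50.0},
]

# Restrict to a defined range (here, the first quarter) ...
q1 = [row["amount"] for row in sales if 1 <= row["month"] <= 3]

# ... then build the summaries named in the text.
summary = {
    "total":   sum(q1),
    "average": sum(q1) / len(q1),
    "maximum": max(q1),
    "minimum": min(q1),
}
print(summary["total"])   # 400.0
```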
By looking through data, searching it as one would search the web or any other medium, and then organising the result into a model or a representation of events, we move from a dumb store to information.
Previously, making these searches demanded knowledge of structured query languages such as SQL; more often than not this required a request to the IT department. XCiPi uses intuitive, non-technical ways to access and work with data, such as natural language queries (NLQ) and drag-and-drop interfaces. It helps you classify your searches and repeat them with new parameters.
The tools that handle large, unstructured and diverse sets of data, such as Twitter, Google, Pinterest or Facebook, open up new ways to understand your data. The process of searching and then broadcasting the results provides more than just a glimpse of the possibilities for gathering business knowledge and insights.
In the same way that we gather information and facts from the web, the challenges of navigating big data sets and finding what you want are solved by tools that can equally be applied to your business or other data. Data becomes information, and information turns into knowledge.
The application makes sense of your data by distilling it into meaningful segments and classifications. It does this by searching through it, blending it, refining it and analysing it.
Once the data is refined, the system can learn new ways of organising it. As additional metadata is added, the data develops structure, meaning and relationships, and assumes order.
A set of business rules can be designed that add aggregated results to the newly found data sets. Simple forms of aggregation such as SUM, AVERAGE, MIN and MAX can be combined with filters such as DATE between then AND now. These can be saved as business-rule recipes that can be reused, refactored or applied to other result sets. Now you have knowledge!
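A business-rule recipe of this kind can be sketched in a few lines of Python. The `make_recipe` helper and the row layout are assumptions for illustration: a recipe pairs an aggregation with a date filter, is saved once, and can then be reused or re-applied with new parameters.

```python
from datetime import date

def make_recipe(aggregate, start, end):
    """Build a reusable rule: filter rows to [start, end], then aggregate."""
    def rule(rows):
        values = [r["value"] for r in rows if start <= r["date"] <= end]
        return aggregate(values)
    return rule

rows = [
    {"date": date(2024, 1, 5), "value": 10},
    {"date": date(2024, 2, 9), "value": 30},
    {"date": date(2024, 6, 1), "value": 99},
]

# "SUM where DATE between then and now", saved as a recipe ...
q1_total = make_recipe(sum, date(2024, 1, 1), date(2024, 3, 31))
print(q1_total(rows))   # 40

# ... and reused with a different aggregation over the same range.
q1_peak = make_recipe(max, date(2024, 1, 1), date(2024, 3, 31))
print(q1_peak(rows))    # 30
```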
Most of those who need to access data do not have programming skills or in-depth knowledge of how to work with a data store. This does not mean they do not understand data; it is just not accessible to them. XCiPi provides access with easy-to-use, intuitive tool sets.
Large amounts of data are difficult to understand, but graphical representations can render them meaningful in many ways. These views can be interactive: for example, you can drag the dots (nodes) in the image.