The scope of work our team can undertake

Extracting information from unstructured data sources (natural-language texts, images, and multimedia), as well as methods for executing queries across heterogeneous sources of large data sets.

Approximate query evaluation and assessment of the quality of analytical-processing results (information extraction, query processing, query optimization, approximate query evaluation, heterogeneous information resources, data quality).
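To illustrate what approximate query evaluation with quality assessment can look like in practice, here is a minimal sketch in Python: it estimates an aggregate (the mean) from a small uniform sample and reports a confidence half-width so the caller can judge result quality. The function name and sampling fraction are illustrative assumptions, not part of any specific system described here.

```python
import random
import statistics

def approximate_avg(data, sample_frac=0.01, seed=42):
    """Estimate the mean of a large dataset from a small uniform sample.

    Returns (estimate, 95%-confidence half-width) so the caller can
    assess the quality of the approximate answer.
    """
    rng = random.Random(seed)  # fixed seed only to make the sketch reproducible
    n = max(2, int(len(data) * sample_frac))
    sample = rng.sample(data, n)
    est = statistics.fmean(sample)
    # Standard error of the sample mean; 1.96 corresponds to a 95% normal interval.
    se = statistics.stdev(sample) / n ** 0.5
    return est, 1.96 * se

data = list(range(1_000_000))  # true mean is 499999.5
est, half_width = approximate_avg(data)
```

Scanning 1% of the data yields an estimate within a few thousand of the true mean, together with an explicit error bound; the same sampling idea underlies approximate answers to SUM, COUNT, and similar analytical queries.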


Declarative specification of analytical-workflow scenarios and methods for optimizing these scenarios.


Adaptive, scalable algorithms for the core analytical data-processing operations, suitable for execution on high-performance parallel computing systems.
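The scalability mentioned above typically rests on partitioning the data and combining partial results. As a hedged sketch (not any particular system's implementation), here is the partition/combine pattern for one basic analytical operation, aggregation, using Python's standard thread pool; joins and grouping follow the same shape.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(values, n_workers=4):
    """Sum a large list by splitting it into partitions, aggregating each
    partition in a separate worker, and combining the partial results."""
    chunk = max(1, len(values) // n_workers)
    parts = [values[i:i + chunk] for i in range(0, len(values), chunk)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = pool.map(sum, parts)  # each worker aggregates its partition
    return sum(partials)  # final combine step

total = parallel_sum(list(range(1_000_001)))  # → 500000500000
```

On a real high-performance cluster the workers would be processes or nodes rather than threads, but the two-phase structure (local aggregation, then a global combine) is the same.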


Quality evaluation of source data and of the results of its processing, and derivation of high-level, semantically rich information flows from low-level data.

Big Data: Storage and Processing

This area of our activity concerns the analytical processing of large data sets produced by information collection systems such as sensor networks, mobile communication networks, and Internet services such as social networks.

According to the IDC Digital Universe Study, over 2 zettabytes (2 trillion gigabytes) of data are created and replicated worldwide each year. Moreover, since the rate of data growth is expected to increase exponentially over the next decade, the volume of information could grow as much as 50-fold. It is impossible to work effectively with such amounts of information using traditional methods.

 

The enormous growth of information flowing around the world has required not only new means of storage, but also fundamentally different models of processing, search, analysis, and use. There is a constant need to develop software and hardware systems based on new approaches to data retention, whose creation could lead to breakthroughs in business.