Michael Stonebraker expects a substantial increase in the number of New SQL engines using a variety of architectures in the near future.
Good to see this topic and article; I enjoyed your recent, very informative presentation at the NIST symposium on Big Data (http://www.nist.gov/itl/ssd/is/big-data.cfm).
Taking data in is important, but so are sending data out and sharing results and information.
Our open source project has developed code-free templates that let you rapidly build XML open data services from SQL sources, so you can quickly and easily present and share results and knowledge.
See our online video training materials for more details: http://www.youtube.com/user/TheCAMeditor
Oracle Public Sector
Meh, I'm not convinced. I think that the "Old SQL" solutions will adapt and be just fine. I think what we are seeing in data is a blip, and that the big vendors like Oracle and Microsoft will end up taking care of most of these issues within the next 1-2 years.
I think what will happen is that the best aspects of NoSQL and so-called New SQL will get integrated into their products and as a result those products will be much better than the one-off specialized products.
This is already happening with in-memory, which is largely what I assume you are talking about with "New SQL". MS and Oracle are both working on in-memory solutions and putting together "out of the box" systems with massive throughput. Yeah, they are expensive, but still. MS's new Parallel Data Warehouse can run massive queries on petabytes of data in milliseconds now, I mean...
With the implementation of writable columnstore indexes (coming soon in SQL Server) you'll be able to run blazingly fast queries against OLTP systems, without locking...
The problem with NoSQL and New SQL systems is typically that they are very specialized: very good for a very narrowly defined set of tasks. But the older systems have a much larger established infrastructure and ecosystem of tools, so if you can add those new capabilities to the older systems you get a much more powerful and well-rounded system than if you simply adopt some new specialized product. And it will be much more expensive for the new players to build all of the stuff the older guys already have than the other way around.
I would also submit that while the VOLUME of "new forms of data" will definitely exceed the volume of older forms of transactional data, the VALUE of older forms of data will remain higher. There is a reason that systems evolved the way that they did. Essentially the most valuable problems were tackled first. Yes, all of that phone and web data is interesting, but a single row in a traditional RDBMS is worth thousands or millions of "rows" of unstructured or "New" data.
And this is the issue: yeah, there is value in data mining weblogs and location data, etc., but it's like the difference between mining for gold and mining for aluminum. In order to make money mining aluminum you have to process a lot more of it to get the same value you get from a few ounces of gold.
So every "ton" of "new data" is worth about "1 pound" of "old data", if you see what I mean.
What this means is that the "value density" still resides with the older systems, and that's why I think that things will end up coming back to them. They are the anchors of the data world.
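The ton-vs-pound analogy above can be made concrete with a quick back-of-the-envelope calculation. The numbers below are entirely made up for illustration (a ton is 2,000 pounds, hence the 2000:1 value-density ratio); they are not real measurements of any system:

```python
# Illustrative "value density" comparison -- all figures are invented
# to make the commenter's ton-vs-pound analogy concrete.

old_rows = 1_000_000           # rows in a traditional transactional RDBMS
old_value_per_row = 1.0        # treat each "gold" row as one unit of value

new_rows = 1_000_000_000       # weblog/location "rows" -- 1000x the volume
new_value_per_row = 0.0005     # the "aluminum": 2000x less value per row

old_total = old_rows * old_value_per_row
new_total = new_rows * new_value_per_row

# Old data: 1,000,000 units; new data: 500,000 units.
# Even at 1000x the volume, the new data is worth half as much in total.
print(f"Old data total value: {old_total:,.0f}")
print(f"New data total value: {new_total:,.0f}")
print(f"Value density ratio (old:new): {old_value_per_row / new_value_per_row:,.0f}:1")
```

Under these assumed figures, the higher-volume "new" data still totals less value than the smaller "old" dataset, which is the commenter's point about value density.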
Which open source database engines would you categorise as "New SQL"?