By Lars Hofhansl
My last post here was in 2019.
BigData has had an "interesting" journey: from the "thing" that everybody needed but did not quite understand, to lots of fragmented solutions, to some very large installations, to partial irrelevance due to the public clouds, and lately to something of a return to "small" data.
HDFS, HBase, and Phoenix were part of this ride, and the almost 10 years I worked on those were some of the best in my career. Open Source is fun, and I was lucky enough to contribute hundreds of changes and improvements to these projects.
Since 2020 I have worked a bit on Trino, Kafka, Spark, and some other projects, but have since come full circle and now work on (and with) more "traditional" databases. Some of these are internal, and I can't really talk about them... Think equivalents to PostgreSQL and DuckDB, respectively.
Since every single comment in the last few years was spam about some Hadoop training or other - and I am flattered that someone might think to reach anyone through "advertising" on my HBase blog - I have now disabled all comments.
Except for Trino, Kafka, and Spark (and perhaps a few more), the BigData boom is over; most of the projects are at best in maintenance or legacy mode.
It was a good run: I learned a lot, had a lot of fun, and deployed a few systems serving trillions of requests/transactions.
So Long, and Thanks for All the Fish