The recent merger of Hadoop-oriented arch-rivals Cloudera and Hortonworks into a $5.2 billion tech giant has brought the critics out of the woodwork. Of these, Bloomberg columnist Shira Ovide was most emphatic in calling the marriage of the two unprofitable companies “a seafaring union of two underwater companies.”

Hadoop’s container infrastructure has been rendered irrelevant by the cloud, in both cost and ease of use. There have been efforts to make Hadoop more cloud-friendly, but the way forward needs a lot more work.

Alex Woodie writes about Hadoop’s future in this report published by Datanami:

It begs the question: Will Hadoop still be Hadoop when YARN is replaced with Kubernetes and HDFS is replaced with whatever S3-compatible object storage system emerges as the winner? If you consider Hadoop to be a loose collection of 40 open source projects – HBase, Spark, Hive, Impala, Kafka, Flink, MapReduce, Presto, Drill, Pig, Kudu, etc. etc. etc. – then maybe the question is moot.

From a practical standpoint, there’s just no way that customers are suddenly going to turn off the millions of Hadoop nodes that have been deployed over the years because the two biggest Hadoop distributors are consolidating. For the thousands of companies that have built Hadoop data lakes, the focus will remain the same: Figuring out how to get value out of all that data.
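The stack shift Woodie describes can be sketched concretely: Spark submitted to Kubernetes as its scheduler instead of YARN, reading from S3-compatible object storage through the S3A connector instead of HDFS. This is a hedged illustration only; the cluster URL, container image, endpoint, and bucket names are hypothetical placeholders, not values from any real deployment.

```shell
# Sketch: Spark on Kubernetes with S3-compatible storage in place of
# YARN and HDFS. All hostnames, image names, and paths are made up.
spark-submit \
  --master k8s://https://my-k8s-apiserver:6443 \
  --deploy-mode cluster \
  --conf spark.kubernetes.container.image=my-registry/spark:3.5.0 \
  --conf spark.hadoop.fs.s3a.endpoint=https://s3.example.com \
  --conf spark.hadoop.fs.s3a.path.style.access=true \
  --class org.example.MyJob \
  s3a://my-bucket/jars/my-job.jar
```

The point of the sketch is that nothing in it is Hadoop-specific infrastructure: Kubernetes handles scheduling and the `s3a://` scheme points at any S3-compatible store, which is exactly why the question of what remains "Hadoop" arises.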

While classic Hadoop may be a legacy technology, there is still incentive in the community to adapt it to emerging requirements, just as IBM has done with its mainframe platforms. The question is whether it can catch up fast enough to keep the installed base growing.