I am not sure the answer to this question is yes. Sure, you can find plenty of vendors who claim they already have a product that miraculously converts raw data into valuable information, but is this really the case? I doubt it.
My own experience with real-life data has shown me that generating information from data is a complex process. It almost always requires an understanding of the domain in which the data is generated, and the issues with the data themselves make the whole thing an ad-hoc process. I asked the Weka mailing list about commercial tools that could be considered alternatives to Weka, and the responses so far confirm my feelings about the domain: you can't simply throw in a product and expect people to get knowledge out of data. You need a substantial amount of knowledge about machine learning and data mining, and about how these tools and, more importantly, the algorithms work. Most of the time you'll need to adjust many parameters, transform the data, and iterate continuously until you have a pipeline that connects the raw data to some form of decision-making process. Sure, there are tons of ETL tools, business intelligence tools, etc., but these are either infrastructure tools for transforming and/or moving data, or glorified reporting tools.
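To make the adjust-parameters/transform-data/iterate loop concrete, here is a minimal sketch in Python. Everything in it is a stand-in: the data is synthetic, the "model" is a single threshold rule, and the "transformation" is simple feature scaling. The point is the shape of the process, not any particular algorithm.

```python
# Hypothetical sketch of the tune/transform/evaluate loop described above.
# The dataset and the "model" (a threshold rule) are toy stand-ins.
import random

random.seed(0)

# Toy raw data: (feature, label); class 1 tends to have larger values.
data = [(random.gauss(0.0, 1.0), 0) for _ in range(100)] + \
       [(random.gauss(2.0, 1.0), 1) for _ in range(100)]
random.shuffle(data)
train, test = data[:150], data[150:]

def transform(rows, scale):
    # One of many possible data transformations: simple feature scaling.
    return [(x * scale, y) for x, y in rows]

def accuracy(rows, threshold):
    # The "model": predict class 1 when the feature exceeds the threshold.
    return sum((x > threshold) == bool(y) for x, y in rows) / len(rows)

# Iterate over transformations and model parameters, keep the best pipeline.
best = max(
    ((scale, th, accuracy(transform(train, scale), th))
     for scale in (0.5, 1.0, 2.0)
     for th in (i / 10 for i in range(-20, 40))),
    key=lambda t: t[2],
)
scale, threshold, _ = best
print("held-out accuracy:", accuracy(transform(test, scale), threshold))
```

In a real project each of these pieces (the data, the transformations, the model family, the evaluation) is far messier, which is exactly why the loop keeps being re-run by a person who understands the domain.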
The machine learning domain is a moving target, with new methods introduced continuously, and I cannot think of a product that integrates these methods in a way that lets someone with no knowledge of machine learning or data mining use it to create meaningful information.
I’ve always considered data mining and the related decision-support domain a service-based industry. Weka seems to be the most suitable tool for this domain, but I have my doubts about scalability when using SVMs or learning Bayesian networks from data. I’d like to see how GPU-based solutions like NVIDIA’s CUDA would perform in these cases, and, more importantly, whether it would be possible to parallelize these algorithms for multicore CPUs or GPUs (maybe even for Cell, the horsepower of the PS3).
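One crude way such algorithms can be parallelized on a multicore CPU, sketched below in Python rather than Weka's Java: train an independent model on a chunk of the data per core, then combine the partial models (a model-averaging style approach). This is not how Weka or any SVM/Bayesian-network learner actually does it; the "learner" here is just a mean, chosen so the combination step is obvious.

```python
# Hypothetical sketch (not Weka's code): data-parallel training across
# CPU cores by fitting one model per chunk and averaging the results.
from concurrent.futures import ProcessPoolExecutor

def fit_centroid(chunk):
    # Toy "learner": the mean of the chunk stands in for a trained model.
    return sum(chunk) / len(chunk)

data = [float(i) for i in range(1000)]
chunks = [data[i::4] for i in range(4)]  # one chunk per worker

with ProcessPoolExecutor(max_workers=4) as pool:
    partial_models = list(pool.map(fit_centroid, chunks))

# Combine the per-core models into one (here: a simple average).
model = sum(partial_models) / len(partial_models)
print(model)  # matches a single-core fit on all the data
```

For real SVM training or Bayesian network structure learning the combination step is much harder, which is precisely the open question about whether these algorithms parallelize well onto GPUs or multicore hardware.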