Big Data is a Big Deal

Last week on Tuesday, June 7, the MinneAnalytics organization hosted its annual Big Data Tech 2016 conference to study and share views on the rapid changes occurring in online data science. Rob wrote about the conference in a post a short while ago; after reviewing my notes, here are my own observations.

Throughout the conference, I was struck by the change in tone from last year's event, which had been attended by many of the same 1,000 or so talented data experts. Last year the theme centered on challenging questions: "How do I sell analytics to the management of retail, health, financial, and service companies?" and "How can one use the volumes of data being collected in strategic decision making?" This year's theme oriented more toward rolling up our sleeves and getting to the work of sharing methods and experiences in implementing the rapidly evolving software tools and technologies.

As a marketing company in this space, the Positively Different Marketing services of Axiom are more important now than ever, as we help our clients adapt to the real-time transaction processing and analysis of Big Data. This analysis now takes place at the online point of sale, rather than through traditional analytics built on post-sale summaries of total transactions. We understand why C-level management needs tools that are rapidly and readily available to perform this real-time analysis.

The keynote address of Big Data Tech 2016 was delivered by SriSatish Ambati, CEO of the fast-growing California AI company H2O.ai. At first I wondered why H2O should be the name of an open-source AI software company offering Big Data solutions. As his theme developed, I realized the name captures a commonsense answer to the Big Data dilemma: the information we need must be extracted from the fast-flowing stream pouring out of the reservoirs of data stored in the cloud. Mr. Ambati encouraged attendees to open their minds by freely sharing ideas and creativity, and to try out the free H2O open-source software demonstration available through CRAN, the R-language package archive. I came away with great anticipation for the rest of the day. The tracks that I followed throughout the day included:
  • A peek at the architecture of Thomson West's Westlaw Big Data solution, which serves the click-event data of clients across its 20,000 supported law offices.
  • Hands-on training in readily available, state-of-the-art data manipulation software such as Apache Spark, Scala, Python, and R, used to access data residing in the cloud through the Hadoop Distributed File System (HDFS).
  • Classic examples showing why data residing in the cloud must be processed by new and different operating systems, as well as by distributed software that executes across the cloud.
  • Visualization tools showing summarized selections and clusterings of internet events generated by the clicks of connected devices, along with the behavior of the customers generating those clicks.
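To give a flavor of the clustering idea in that last track, here is a minimal plain-Python k-means sketch that groups hypothetical click events (clicks per session, minutes on site) into behavioral segments. The data points and segment labels are invented for illustration; a production system would use a library such as Spark MLlib on the full clickstream:

```python
import random

def kmeans(points, k, iters=20, seed=42):
    """Minimal k-means: cluster 2-D points into k groups."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest centroid (squared distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda i: (p[0] - centroids[i][0]) ** 2
                            + (p[1] - centroids[i][1]) ** 2,
            )
            clusters[nearest].append(p)
        # Recompute each centroid as the mean of its cluster.
        for i, c in enumerate(clusters):
            if c:
                centroids[i] = (sum(p[0] for p in c) / len(c),
                                sum(p[1] for p in c) / len(c))
    return centroids, clusters

# Hypothetical click events: (clicks per session, minutes on site).
events = [(2, 1.0), (3, 1.5), (2, 1.2),        # quick visitors
          (25, 30.0), (28, 34.0), (30, 29.0)]  # heavy researchers
centroids, clusters = kmeans(events, k=2)
```

With two well-separated behaviors, the algorithm recovers the two segments regardless of which points seed the centroids.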
The beat goes on. The same summary statistics that we have long been processing at Axiom, which reveal anomalies in monthly sales time series, are now brought down to immediate daily and hourly event-level data points that document the attributes of each individual transaction. The component we continue to add to each Big Data query is the correlation of external factors, such as weather and government economic data, with those transactions. In a follow-up blog post, I'll show how we do this in the Big Data world around us. For more information, contact mreiber@axiomcom.com.
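As a concrete sketch of that shift in granularity, the same summary-statistics check once run on monthly totals can be applied directly to daily figures: flag days that deviate sharply from the norm, then correlate the series with an external factor. The sales and temperature numbers below are hypothetical, and this plain-Python version is only an illustration of the idea, not our production query:

```python
import statistics

def zscore_anomalies(values, threshold=2.0):
    """Return indices whose value deviates from the mean by more than
    `threshold` population standard deviations."""
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)
    return [i for i, v in enumerate(values) if abs(v - mean) > threshold * sd]

def pearson(x, y):
    """Pearson correlation between a sales series and an external factor."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Hypothetical daily sales (one outlier day) and daily temperatures.
daily_sales = [100, 104, 98, 102, 240, 99, 101]
temps_f     = [70, 72, 69, 71, 73, 68, 70]
print(zscore_anomalies(daily_sales))          # -> [4], the outlier day
print(pearson(daily_sales, temps_f))          # strength of the weather link
```

The same two calls generalize from daily to hourly data simply by feeding in finer-grained series; the real work in a Big Data setting is assembling those series from individual transaction events.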