Kepler Invests in its Greatest Resource: Our Talent

Kepler Cannon is a very social firm: we greatly enjoy our bi-weekly happy hours, team lunches in the office on Fridays, and occasional Fitness-Wednesday workout sessions. At times, however, our inner nerds emerge, and when it comes to learning and continued development, Kepler Cannon doesn’t cut corners. Whether it’s our training programs, monthly book allowance, online courses, or conference programs, the firm continues to invest in its consultants.

As a result of Kepler Cannon’s commitment to learning, I had the opportunity to attend the Machine Learning Summer School 2015 in Kyoto, Japan. While the nature of the conference was primarily academic, it was fascinating to see how certain machine learning topics find direct application in our daily work (especially our analytics-related efforts).

While in Kyoto, I had the chance to learn from a variety of world-renowned researchers and professionals. What follows is a quick summary of the two most memorable lectures, covering topics that lend themselves particularly well to some of the analytics work that we do at Kepler Cannon.

Perhaps the most engaging lecture at MLSS was delivered by Prof. Stephen Boyd from Stanford University, who introduced key ideas as well as specific applications of convex optimization. This type of optimization challenge appears everywhere in machine learning and has very direct real-world applications (e.g., determining optimal model weights, selecting cluster assignments, and solving approximation problems). Prof. Boyd even prepared a Convex Optimization Short Course – available to the general public for experimentation!
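To give a flavor of what "determining optimal weights" via convex optimization looks like, here is a minimal sketch (not from the lecture itself – the data and step size are my own toy assumptions): a least-squares objective is convex, so gradient descent converges to the same global optimum that the normal equations give in closed form.

```python
import numpy as np

# Toy convex problem: find weights w minimizing ||Xw - y||^2.
# Because this objective is convex, gradient descent converges to
# the same global optimum as the closed-form solution.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true

# Closed-form optimum via the normal equations.
w_closed = np.linalg.solve(X.T @ X, X.T @ y)

# Gradient descent on the same convex objective.
w = np.zeros(3)
lr = 1e-3  # assumed step size, small enough for this toy problem
for _ in range(5000):
    grad = 2 * X.T @ (X @ w - y)  # gradient of the squared error
    w -= lr * grad

print(np.allclose(w, w_closed, atol=1e-6))  # prints True
```

The appeal of convexity, as Prof. Boyd emphasized, is exactly this: any local optimum is the global one, so simple iterative methods are guaranteed to find the best answer.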

The second most memorable lecture was given by Vincent Vanhoucke, a Principal Scientist at Google. Mr. Vanhoucke opened with a general introduction to Neural Networks and Deep Learning, then surveyed both the basic principles and the most recent achievements in each field. One of his key messages – and something I have encountered myself – was that deep learning techniques are usually not the most effective way to get relevant results for specific problem prompts. In prominent Machine Learning competitions (e.g., Kaggle), it is often the simpler techniques (e.g., logistic regression and random forests) that perform best.
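As an illustration of how little machinery one of those "simpler techniques" needs, here is a minimal from-scratch sketch of logistic regression trained by gradient descent – the dataset and learning rate are invented for illustration, not taken from any competition:

```python
import numpy as np

# Logistic regression from scratch on a toy two-class dataset:
# two well-separated Gaussian clusters in 2D (assumed data).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2.0, 1.0, size=(100, 2)),
               rng.normal(+2.0, 1.0, size=(100, 2))])
y = np.repeat([0, 1], 100)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train by gradient descent on the log loss.
w, b = np.zeros(2), 0.0
lr = 0.1  # assumed learning rate
for _ in range(1000):
    p = sigmoid(X @ w + b)           # predicted probabilities
    grad_w = X.T @ (p - y) / len(y)  # gradient of the log loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(accuracy)  # near-perfect on this well-separated toy data
```

A baseline like this trains in a fraction of a second and is easy to interpret, which is a large part of why it so often beats far heavier models on tabular competition data.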

In summary, MLSS was very exciting! Throughout my time in Japan, I was exposed to many of the latest machine learning techniques – while simultaneously being able to think about practical real-world applications and explore the various historic sites in Kyoto. While the underlying mathematics tend to get difficult rather quickly, it was reassuring to see that the main ideas and core problems remain simple in their essence. So, just like in strategy consulting: when tackling complex problems, it is often best to start with the simplest approach and gradually, methodically expand from there.