Industry Insights


February Monthly News Digest: Data Science & Analytics

Posted
February 22, 2023

Welcome to Burtch Works’ Monthly News Digest, your go-to source for a comprehensive overview of the month’s most significant stories in the world of data science and analytics. This month, we delve into the essential role of data product managers in driving business growth, the rising concern of security threats in machine learning, and the potential of language models to revolutionize the field of AI-generated content and media.

What Good Data Product Managers Do – And Why You Probably Need One

As companies modernize their data departments and invest in their data stacks, data product managers are emerging at the helm of these projects. They are responsible for building new products and features based on data insights and for ensuring that those products meet the needs of both the business and end users. A good data product manager must have a deep understanding of the technical and business sides of data products, along with strong communication and leadership skills to manage cross-functional teams and convey data insights to stakeholders.

This article highlights the critical role of data product managers in driving business growth and innovation, and why companies that prioritize hiring and developing strong data product managers are more likely to succeed in the modern data-driven landscape. Whether you're migrating to Snowflake, integrating with Databricks, or moving toward a data mesh, this article explains why data product managers are essential to building internal data capabilities as a competitive advantage.

Read the full article here.

The Definitive Guide to Adversarial Machine Learning

Machine learning is increasingly used in applications such as verifying identity, driving cars, and even writing code. With this growing adoption, however, comes increasing concern about security threats. Adversarial examples, imperceptible changes to an input that manipulate a machine learning model's behavior, can cause anything from annoying errors to fatal mistakes.

To address this concern, AI researchers Pin-Yu Chen and Cho-Jui Hsieh have written a book called Adversarial Robustness for Machine Learning. The book provides a comprehensive overview of adversarial machine learning, including attacks, defenses, certification, and applications. One important point the book raises is the need to rethink how machine learning models are evaluated. Accuracy is the standard metric used to grade models, but it does not capture a model's robustness against adversarial attacks; in fact, the article notes that some studies have found higher accuracy to be associated with higher sensitivity to adversarial perturbations.

Read the full article here.
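To make the idea of an adversarial example concrete, here is a minimal sketch of a fast-gradient-sign-style perturbation against a toy logistic classifier. The weights, input, and step size below are invented for illustration and are not taken from the book:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A toy linear classifier with fixed, hypothetical weights.
w = np.array([1.0, -0.5, 0.25])

def predict(x):
    return int(sigmoid(w @ x) > 0.5)

# A clean input the model classifies as class 1.
x = np.array([0.4, 0.2, 0.4])

# FGSM-style attack: step each feature in the direction of the sign
# of the loss gradient w.r.t. the input. For logistic loss with true
# label y = 1, the gradient is dL/dx = (sigmoid(w @ x) - 1) * w.
grad = (sigmoid(w @ x) - 1.0) * w
epsilon = 0.3
x_adv = x + epsilon * np.sign(grad)

# The small per-feature change flips the model's decision.
print(predict(x), predict(x_adv))  # prints "1 0"
```

The attack never needs to change the input by much: it only needs to know which direction of change increases the loss fastest, which is exactly the information the gradient provides.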

Like ChatGPT? You Haven’t Seen Anything Yet

Language models like ChatGPT have made waves in the AI community, but according to a recent article, we haven't seen anything yet. Researchers are continuing to make advancements in the field of language models, and they're only getting better. The article highlights several research projects that are pushing the boundaries of what we thought was possible with language models, such as creating models that can understand and use sarcasm, model the personality of the writer, and even generate new human-like languages. These advancements are exciting, but they also raise concerns about the ethical use of such models and the potential for them to be misused. Despite these concerns, the potential applications of language models are vast, and we can expect to see even more breakthroughs in the coming years.

Read the full article here.

AI Hallucinations: A Provocation

Get ready for a paradigm shift in AI-generated content! While ChatGPT has been criticized for its "hallucinations," its tendency to "make up" facts and details, one writer sees them as a potential catalyst for true AI creativity. ChatGPT's hallucinations can be seen as a precursor to art: they represent something that does not yet exist, and what is art if not the creation of something new and original?

While most AI-generated art is derivative, ChatGPT's unique ability to imagine the non-existent offers an exciting glimpse into what AI creativity could become. The writer poses the question: what would happen if we trained an AI like ChatGPT to create great stories with literary history and style? Could we build a language model that experiments with imaginative ideas and creates something truly new? This provocative article argues that AI hallucinations may just be the key to unlocking true AI creativity.

Read the full article here.

Google Created an AI That Can Generate Music From Text Descriptions, But Won’t Release It

Google has developed an AI called MusicLM that can generate high-quality music from text descriptions, according to a TechCrunch report. The AI uses natural language processing to interpret descriptions of music and then generates corresponding pieces. MusicLM was trained on a massive dataset of 280,000 hours of music, allowing it to produce coherent songs for descriptions of "significant complexity."

The technology captures nuances like instrumental riffs, melodies, and moods, producing results that sound remarkably close to what a human artist might create. Despite this achievement, Google has no plans to release the software to the public, raising questions about the role of AI in music creation and its potential to blur the line between human and machine creativity.

Read the full article here.