It hasn’t been that long since artificial intelligence began its journey out of the realm of sci-fi novels and into our daily lives. Perhaps because of its recency, AI’s transition into real-world systems and technologies has been both inspiring and unsettling, a tension that is just as strong in debates around its future. What should AI become? Who should it serve?
In this week’s Variable, we share two eye-opening contributions to this conversation. If you prefer to keep things more actionable, however, have no fear: we also include some of our recent favorites on topics like MLOps and model stacking. Let’s get to it!
- Learn about the risks of corporate-led AI research. The major progress we’ve seen in recent years in areas like reinforcement learning comes at a steep cost, and tech giants like Google and Facebook have the deep pockets to cover it. But should they be the ones setting the field’s agenda?
Travis Greene asks a key question, which he goes on to answer with nuance: “Should we trust that market-driven AI research and development in the ‘private interest’ will align with human-centric values of transparency, justice, fairness, responsibility, accountability, trust, dignity, sustainability, and solidarity?”
- Explore a potential alternative to the dominance of language models. The most visible examples of AI’s recent strides are likely massive language models like GPT-3 and BERT. In a recent episode of the TDS Podcast, Diffbot CEO Mike Tung chatted with Jeremie Harris about another promising path for developing AI’s future capabilities: knowledge graphs.
- Experiment with stacking to improve your model’s performance. If you only have time to tinker with one hands-on tutorial this week, it might as well be Jen Wadkins’s step-by-step intro to model stacking, an approach that boosts outcomes by taking predictions from several different models and then using them “as features for a higher-level meta model” (see the sketch after this list).
- Get comfortable—or at least better acquainted—with MLOps. Machine learning operations has been a buzzy subfield for a while now, and Yashaswi Nayak’s extremely accessible guide is a wonderful resource for anyone who’d like to learn more about it. Yashaswi begins with basic definitions and then walks us through an entire MLOps lifecycle from infrastructure to deployment.
- Prepare for a graduate degree with practical, hard-earned advice. If you’ve been contemplating going back to school for a master’s in data analytics or data science, you’ll want to catch up with Isabella Velásquez’s account of her own experience a few years back. It includes many practical insights that can set you on the right path and help you make the most of your program.
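If you’d like to see the core idea behind stacking in code before diving into Jen’s tutorial, here’s a minimal sketch using scikit-learn’s StackingClassifier. The base models, meta-model, and synthetic dataset below are illustrative choices of ours, not taken from the tutorial itself.

```python
# A minimal illustration of model stacking with scikit-learn.
# The base models and meta-model here are illustrative, not the
# ones used in the linked tutorial.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Level-0 models: their cross-validated predictions become the
# features fed to the level-1 (meta) model.
base_models = [
    ("rf", RandomForestClassifier(n_estimators=100, random_state=42)),
    ("svc", SVC(probability=True, random_state=42)),
]

# Level-1 meta-model learns how to combine the base predictions.
stack = StackingClassifier(estimators=base_models,
                           final_estimator=LogisticRegression())
stack.fit(X_train, y_train)
print(f"Stacked model accuracy: {stack.score(X_test, y_test):.3f}")
```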
Thank you for joining us for another week of exciting and thought-provoking articles! If any of these posts inspires you to write your own take on the future of AI, machine learning, or another topic entirely, consider sharing it with our team.