History teaches us how to face a hotter future

Pick up most history books and you'll read about wars, empires, inventions, and political movements. But floods, volcanic eruptions, crop failures, and harsh winters have had just as much impact on humanity. In this Bloomberg Green interview, Peter Frankopan, a professor of global history at Oxford University and the author of The Earth Transformed: An Untold History, argues that now, more than ever, it's important to consider the past effects of the environment on humanity (and vice versa) to understand what leads to resilience—or disaster.

As Frankopan explains, climate patterns can reduce harvest levels, increase inflation, trigger cost-of-living crises and calorie shortages, weaken immune systems, and increase the prevalence of disease. That's all happened in the past, but normally not all at the same time. The difference about today's world is that 90% or 98% of the world is warming at the same time. That's unprecedented. The choices we make now could have a pretty profound impact on our outcomes.

Frankopan, while optimistic that humanity can successfully adapt to climate change, warns that the pressures we face are "like little cracks on a glass. Will they turn into something much bigger? Can one person throw a stone and break the whole thing? How do you live in a world where you're not scared of that? How do you prepare for it?"

+ One thing is clear: "artificial intelligence isn't going to magically fix our problems." For that, as we discussed last week, we'll need effective governments implementing policies set up for success. But it's not all bad news. As Lara Williams points out in Bloomberg, "Used intelligently and sensitively, machine learning can be harnessed to bolster people power in the battle to save the planet."

ChatGPT is about to revolutionize the economy. We need to decide what that looks like.

Speaking of AI, here's another situation where deliberate choices could lead to better outcomes.
As David Rotman writes in MIT Technology Review, "New large language models will transform many jobs. Whether they will lead to widespread prosperity or not is up to us." ChatGPT and other generative AI models could automate all sorts of tasks that were once considered outside the realm of automation, from creating graphics to analyzing data. Economists are unsure how this will play out—or how jobs will be affected. Generative AI models hold the promise of jumpstarting our stalled productivity growth, but they could also make income and wealth inequality in the US even worse. As Rotman writes:

The optimistic view: it will prove to be a powerful tool for many workers, improving their capabilities and expertise, while providing a boost to the overall economy. The pessimistic one: companies will simply use it to destroy what once looked like automation-proof jobs, well-paying ones that require creative skills and logical reasoning; a few high-tech companies and tech elites will get even richer, but it will do little for overall economic growth…. Determining which scenario wins out will require a more deliberate effort to think about how we want to exploit the technology.

+ Get an insider's look at ChatGPT and generative AI on O'Reilly.
+ More from MIT Technology Review: "How to Solve AI's Inequality Problem"
+ From the Wall Street Journal: "ChatGPT Fever Has Investors Pouring Billions into AI Startups, No Business Plan Required" (paywall)

You can't regulate what you don't understand

The EU just passed one of the first major laws regulating AI. But you can't govern what you don't know about. Tim O'Reilly, Mariana Mazzucato, and Ilan Strauss of the UCL Institute for Innovation and Public Purpose have published a policy brief on why we need a new disclosure regime for the operating metrics used by Big Tech companies to manage their businesses. (The rise of generative AI tools like ChatGPT has only intensified this need.)
As they point out:

The current disclosures framework for public companies—the annual 10-K financial report in the U.S. and related IFRS-governed filings in the European Union—was designed for industrial economies based primarily on physical assets and in-person consumption. By contrast, today's technology companies derive their value from intangible digital marketplaces and platforms. Since technology shares account for 27.3% of total US market capitalization—roughly equivalent to materials, energy, utilities, and industrials combined—the failure to update disclosure regulations for these radically different businesses is a glaring omission.

Big Tech presents a number of challenges to current disclosure regulation. Reporting focused only on financials ignores the enormous market power of zero-priced products, which may hold dominant market share in a category without any required reporting. Meanwhile, existing "segment reporting" rules haven't scaled with firm size, and they allow management too much discretion over what is reported and how.

The brief explores these problems in detail and makes a number of concrete suggestions for how we might bring disclosures for internet-scale companies into the 21st century. Read it to learn why a better disclosure framework could be a "crucial first step to understanding digital power." While the UCL report focuses on more "traditional" Big Tech areas like search, social media, and ecommerce, Tim has written about why mandatory disclosures are also an essential first step toward regulating AI.

+ You can get a good overview of the issue from Tim's recent article in the Evening Standard: "The First Step to Proper AI Regulation Is to Make Companies Fully Disclose the Risks."
+ More from Tim on what the failures of corporate governance can teach us about AI regulation: "The Alignment Problem Is Not New."