(TechCrunch) Anthropic has published the system prompts for Claude, its family of generative AI models. They instruct the model how it should — and shouldn’t — behave. Read more here.
(NYT) Pavel Durov, the founder of Telegram, the messaging app with more than 900 million users, was taken into custody by French authorities. Read more here.
(TechCrunch) After fintech Bolt surprised the industry with a leaked term sheet that revealed it is trying to raise at a $14 billion valuation, things got weird. Read more here.
(Gizmodo) Twin profiles of Thiel-backed, defense-minded Silicon Valley moguls give us a glimpse of the future of war, and of America. Read more here.
(The Information) So far, many of the improvements in large language models’ capabilities have stemmed from a surprisingly simple concept: scaling laws. Essentially, researchers have noticed that the more computing power and data you use to train AI models, the better they perform. Read more here.
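For readers who want the mechanics (this formula is illustrative and not from the article): scaling laws are often summarized as a Chinchilla-style power law, L(N, D) ≈ E + A·N^(−α) + B·D^(−β), where L is the model's loss, N is the parameter count, D is the number of training tokens, and E, A, B, α, β are empirically fitted constants. Spending more compute to grow N and D drives the last two terms toward zero, which is the simple reason bigger training runs have kept producing better models.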
(ROW) “These big companies think they can enter small villages like ours, take our land, and destroy it.” Read more here.
(TechCrunch) California’s bill to prevent AI disasters, SB 1047, has faced significant opposition from many parties in Silicon Valley. Today, California lawmakers bent slightly to that pressure, adding in several amendments suggested by AI firm Anthropic and other opponents. Read more here.
(FT) Web publishers say an AI developer is swarming their sites, collecting content to train models and ignoring orders to stop. Read more here.
(WSJ) Artificial intelligence was still the main source of profit growth for the S&P 500 this earnings season. But AI-driven firms are largely boosting each other’s profits. Read more here.
(WSJ) Nvidia, like Apple, shows that if you want to become a giant, you’ve got to be as good at software as you are at hardware. Read more here.