Boston Dynamics’ New Atlas Robot Is a Swiveling, Shape-Shifting Nightmare

Jess Weatherbed reports via The Verge: It's alive! A day after announcing it was retiring Atlas, its hydraulic robot, Boston Dynamics has introduced a new, all-electric version of its humanoid machine. The next-generation Atlas robot is designed to offer a far greater range of movement than its predecessor. Boston Dynamics wanted the new version to show that Atlas can keep a humanoid form without limiting "how a bipedal robot can move." The new version has been redesigned with swiveling joints that the company claims make it "uniquely capable of tackling dull, dirty, and dangerous tasks." The teaser showcasing the new robot's capabilities is as unnerving as it is theatrical. The video starts with Atlas lying in a cadaver-like fashion on the floor before it swiftly folds its legs backward over its body and rises to a standing position in a manner befitting some kind of Cronenberg body-horror flick. Its curved, illuminated head does add some Pixar lamp-like charm, but the way Atlas then spins at the waist and marches toward the camera really feels rather jarring. The design itself is also a little more humanoid. Similar to bipedal robots like Tesla's Optimus, the new Atlas now has longer limbs, a straighter back, and a distinct "head" that can swivel around as needed. There are no cables in sight, and its "face" includes a built-in ring light. It is a marked improvement on its predecessor and now features a bunch of Boston Dynamics' new AI and machine learning tools. [...] Boston Dynamics said the new Atlas will be tested with a small group of customers "over the next few years," starting with Hyundai.

Read more of this story at Slashdot.

Feds Appoint ‘AI Doomer’ To Run US AI Safety Institute

An anonymous reader quotes a report from Ars Technica: The US AI Safety Institute -- part of the National Institute of Standards and Technology (NIST) -- has finally announced its leadership team after much speculation. Appointed as head of AI safety is Paul Christiano, a former OpenAI researcher who pioneered a foundational AI safety technique called reinforcement learning from human feedback (RLHF), but is also known for predicting that "there's a 50 percent chance AI development could end in 'doom.'" While Christiano's research background is impressive, some fear that by appointing a so-called "AI doomer," NIST may risk encouraging non-scientific thinking that many critics view as sheer speculation. There have been rumors that NIST staffers oppose the hiring. A controversial VentureBeat report last month cited two anonymous sources claiming that, seemingly because of Christiano's so-called "AI doomer" views, NIST staffers were "revolting." Some staff members and scientists allegedly threatened to resign, VentureBeat reported, fearing "that Christiano's association" with effective altruism and "longtermism could compromise the institute's objectivity and integrity." NIST's mission is rooted in advancing science by working to "promote US innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality of life." Effective altruists believe in "using evidence and reason to figure out how to benefit others as much as possible" and longtermists believe that "we should be doing much more to protect future generations," both of which are more subjective and opinion-based. On the Bankless podcast, Christiano said last year that "there's something like a 10-20 percent chance of AI takeover" that results in humans dying, and "overall, maybe you're getting more up to a 50-50 chance of doom shortly after you have AI systems that are human level." "The most likely way we die involves -- not AI comes out of the blue and kills everyone -- but involves we have deployed a lot of AI everywhere... [And] if for some reason, God forbid, all these AI systems were trying to kill us, they would definitely kill us," Christiano said. As head of AI safety, Christiano will seemingly have to monitor for current and potential risks. He will "design and conduct tests of frontier AI models, focusing on model evaluations for capabilities of national security concern," steer processes for evaluations, and implement "risk mitigations to enhance frontier model safety and security," the Department of Commerce's press release said. Christiano has experience mitigating AI risks. He left OpenAI to found the Alignment Research Center (ARC), which the Commerce Department described as "a nonprofit research organization that seeks to align future machine learning systems with human interests by furthering theoretical research." Part of ARC's mission is to test if AI systems are evolving to manipulate or deceive humans, ARC's website said. ARC also conducts research to help AI systems scale "gracefully." "In addition to Christiano, the safety institute's leadership team will include Mara Quintero Campbell, a Commerce Department official who led projects on COVID response and CHIPS Act implementation, as acting chief operating officer and chief of staff," reports Ars. "Adam Russell, an expert focused on human-AI teaming, forecasting, and collective intelligence, will serve as chief vision officer. Rob Reich, a human-centered AI expert on leave from Stanford University, will be a senior advisor. And Mark Latonero, a former White House global AI policy expert who helped draft Biden's AI executive order, will be head of international engagement." Gina Raimondo, US Secretary of Commerce, said in the press release: "To safeguard our global leadership on responsible AI and ensure we're equipped to fulfill our mission to mitigate the risks of AI and harness its benefits, we need the top talent our nation has to offer. That is precisely why we've selected these individuals, who are the best in their fields, to join the US AI Safety Institute executive leadership team."
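For readers unfamiliar with the technique Christiano is credited with pioneering, RLHF broadly works by training a reward model on human preference comparisons between model outputs and then fine-tuning the language model against that learned reward. The following is only an illustrative sketch of the first stage, a pairwise preference loss for a reward model; the RewardModel class, tensor shapes, and training data here are hypothetical and are not taken from OpenAI's or NIST's actual code.

```python
# Illustrative sketch (not any organization's actual implementation):
# training a reward model on pairwise human preferences, the first stage of RLHF.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Toy stand-in for a transformer-based reward model: maps a pooled
    response embedding to a single scalar reward."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, response_embedding: torch.Tensor) -> torch.Tensor:
        return self.score(response_embedding).squeeze(-1)

def preference_loss(reward_chosen: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry style objective: the human-preferred response should
    # receive a higher scalar reward than the rejected one.
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Hypothetical batch of pooled embeddings for preferred vs. rejected responses.
model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
chosen, rejected = torch.randn(8, 16), torch.randn(8, 16)

loss = preference_loss(model(chosen), model(rejected))
loss.backward()
optimizer.step()
```

In a full RLHF pipeline, the trained reward model would then score samples from the language model while a reinforcement learning step (commonly PPO) updates the policy; that second stage is omitted from this sketch.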

Read more of this story at Slashdot.
