The protection of human rights should be front and centre of any decision to implement AI-based systems, whether they are used as corporate tools, such as in recruitment, or in areas such as law enforcement.
And unless sufficient safeguards are in place to protect human rights, there should be a moratorium on the sale of AI systems; those that cannot be brought into compliance with international human rights law should be banned.
Those are just some of the conclusions from the Geneva-based Human Rights Council (HRC) in a report for the United Nations High Commissioner for Human Rights, Michelle Bachelet.
“The right to privacy in the digital age” [download] takes a close look at how AI – including profiling, automated decision-making, and other machine-learning technologies – affects people’s rights.
The UK government has published its much-awaited National AI Strategy in pursuit of “global science superpower” status.
The document talks of plans for a “new national programme and approach to support research and development” plus a government white paper on the governance and regulation of AI [PDF].
Details of the strategy were trailed back in January when the AI Council published its “AI Roadmap” including 16 recommendations to the government.
Also, GM is transforming from an automaker to a platform innovator.
Waymo can also operate its self-driving taxi fleet in San Francisco and San Mateo; its cars are allowed to go faster, up to 65 miles per hour, meaning they can drive on highways. Again, they can operate only in conditions up to light rain and light fog, and no hours of operation were specified.
The vehicles have to be capable of performing with Level 3 autonomy or higher.
They might be self-driving, but there has to be meat behind the wheel.
“…there is still a need for a human driver in the worst-case scenario or emergency, to apply the brakes or halt the car. Most Level 3 cars have a speed limit below which no human intervention is required, but beyond that, the driver has to be alert.”
I’m really interested in what’s being tested here. I did a bunch of work last year on the Australian regulatory regime for connected and autonomous vehicles (CAVs), which assumed Level 4 and 5 vehicles; Level 3 is the transitional space where who or what is responsible for the vehicle isn’t as clear.
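For readers unfamiliar with the autonomy levels being discussed, here is a minimal, purely illustrative Python sketch of the distinction between Levels 3 and 4. The level names and the human-fallback flags are paraphrased from the SAE J3016 taxonomy, not taken from the article, and the function name is our own invention:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SaeLevel:
    """One rung of the SAE J3016 driving-automation ladder (paraphrased)."""
    level: int
    name: str
    human_must_supervise: bool  # must a human stay ready to take over?

# Levels 0-2: the human drives. Level 3: the system drives but a human
# must respond to takeover requests. Levels 4-5: no human fallback needed
# (for Level 4, only within the vehicle's operational design domain).
SAE_LEVELS = [
    SaeLevel(0, "No automation", True),
    SaeLevel(1, "Driver assistance", True),
    SaeLevel(2, "Partial automation", True),
    SaeLevel(3, "Conditional automation", True),
    SaeLevel(4, "High automation", False),
    SaeLevel(5, "Full automation", False),
]

def needs_human_fallback(level: int) -> bool:
    """Return True if a human driver must remain available at this level."""
    return next(l for l in SAE_LEVELS if l.level == level).human_must_supervise
```

The regulatory ambiguity the comment above describes sits exactly at the boundary where `needs_human_fallback(3)` is true but `needs_human_fallback(4)` is not.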
Would you like AI with that?
Big Blue has taken a bite out of McDonald’s, acquiring the burger chain’s automated order taking (AOT) tech – and the “McD Tech Labs” that built it – for an undisclosed consideration that may or may not include an upsold serve of fries.
The labs were created after Mickey Dee’s 2019 acquisition of AI voice recognition startup Apprente – a deal touted as giving the burger-slinger the special sauce needed to build voice recognition tech into mobile ordering services or McKiosks.
It’s better to work for the Man than the Machine
Applying AI and automation to jobs can have both positive and negative impacts on workers, according to a new study.
“The impact of automation and artificial intelligence on worker well-being”, by Georgia Institute of Technology boffin Daniel Schiff and Georgia State University’s Luisa Nazareno, found that workers in jobs that become more automated experience lower levels of stress – but their health and job satisfaction both worsen.
The USA’s Department of Defense has created the post of Chief Digital and Artificial Intelligence Officer (CDAIO), which is expected to encompass three existing leadership offices and develop advanced fighting capabilities.
“The Department has made significant strides in unlocking the power of its data, harnessing artificial intelligence (AI), and providing digital solutions for the joint force,” reads a DoD memo [PDF] released Wednesday by deputy defense secretary Kathleen Hicks.