He poses a trolley-problem scenario to illustrate. "Say a car is driving in the right lane, and there's a truck in the lane to the left and a bicyclist just to the right. The car might edge closer to the truck to make sure the cyclist is safer, but that would put more risk on the occupant of the car. Or it could do the opposite. Whatever decision the algorithm makes in that scenario would be implemented in millions of cars." If the scenario arose 100,000 times in the real world and resulted in accidents, several more, or fewer, bicyclists could lose their lives as a result of the machines' decision. That kind of tradeoff goes almost unnoticed when we drive ourselves, Awad continues: we experience it as a one-off. But driverless cars must grapple with it at scale.