BRISTOL — For some, artificial intelligence may conjure sci-fi movie plots and fears of sentient robots challenging humanity. Not Bruce Duncan. The greatest threat to the future of AI, he believes, is ignorance.
Duncan discussed his concern as he sat only a few feet away from Bina48, a humanoid robot he helped create that’s able to analyze and respond to hundreds of conversation starters. AI algorithms allow her to act, think and respond like a human.
Although she’s just a head and torso, sitting atop a desk in the living room of a house at the end of a winding dirt road, she can concoct facial expressions with a bizarre form of human vulnerability, as her brown eyes track the room around her.
Bina48 is programmed with artificial intelligence, a technology whose precise definition scholars have long debated, but which broadly imitates “human” decision-making and learning.
She’s a creation of the Terasem Movement Foundation, headquartered in Bristol, Vt., of which Duncan is managing director. With Bina48, the nonprofit is testing a hypothesis that human consciousness can be digitally stored, and maybe one day physically stored in a cloned human form.
While his project raises many futuristic questions, Duncan is more concerned about how AI may be exploited for unjust purposes in the present day. He worries that if public officials are ignorant of the technology’s power, massive problems could arise.
Rep. Brian Cina, P-Burlington, shares those concerns. He led the creation of a task force that has been studying the growth of AI in Vermont since September 2018. After collecting testimony and public feedback, the task force is getting ready to submit its final report in January.
“Whether people like it or not, artificial intelligence is a quickly growing and changing technology. And it’s progressing at exponential rates,” Cina said last month. “Too often in human history we have failed to really look at the full scale of benefits and consequences of our choices … Now we have a chance to get ahead of the curve.”
In milking barns and on the internet
While Terasem is exploring the future possibilities of AI, other forms of the technology are already in everyday use at businesses throughout Vermont.
Thanks in part to AI technology, Bob Sunderland no longer has to wake up at 4 a.m. for the morning milking on his Bridport dairy farm. Now, that’s a job for a robot.
Sunderland owns four robots programmed with AI, called Lely Astronauts, that have transformed the way he milks his more than 200 dairy cows. Milking used to take four hours each day — cows needed to be fetched and individually milked, all while Sunderland and his staff dodged rogue cow kicks. Now, milking happens all day long, with no human interaction required.
With the Lely Astronaut, milking sessions are voluntary. On a recent fall day, cows, incentivized by grain, sauntered up to the machines and munched on their snacks while they waited for the robots to take action. There’s no orderly factory line of cows — most are lounging on the ground of the airy milking barn, chewing cud. The cows choose when they want to get milked, Sunderland said, so they’re not in a rush.
“The big goal is cow comfort,” Sunderland said. “If you keep a cow comfortable and healthy, they’re going to reward you for that.”
The Lely Astronaut starts by cleaning the udders with a circulating brush mounted on a metal arm, and finishes with a disinfecting spray. Then it uses laser technology to locate each teat and attach pumps that milk the cow. The whole process takes only a few minutes.
The machine does more than milking. AI predictive technology collects data about the cows as they’re being milked. This data is analyzed and then used to predict the best time to breed a cow and the best time to milk a cow to optimize milk production. The machines also track the health of each cow and notify Sunderland if they’re showing signs of sickness, a technology he calls “cow CSI.”
Sunderland bought his four machines in May 2017 for about $200,000 each. While the Lely Astronauts may not be financially accessible for every farmer, Sunderland said they’ve paid off for him. Since he bought the machines, his milk production has gone up by about 15%. They’ve also saved him the money he would have spent on milking staff, jobs Sunderland said are less desirable and sometimes difficult to fill.
AI is also being used to direct marketing campaigns in the state. Faraday, an AI marketing company in Burlington, is working with Vermont businesses and the state to develop AI technology that more accurately targets campaigns to potential consumers.
Faraday, which works with the Vermont Department of Economic Development’s ThinkVermont initiative, uses predictive AI technology to identify the profiles of people who are most likely to move to Vermont. According to Robbie Adler, Faraday’s chief strategy officer, the company culls public data, like the census, along with information from data licensing companies, to generate profiles. Then, people who match those profiles are targeted with ThinkVermont ads on their Facebook and LinkedIn feeds.
“There is a real focus on how we can really identify people who are most receptive to the message of coming to Vermont,” Adler said, “not only as a place to come and visit, but as a place to live.”
Consensus AI, another data analysis company for government organizations, partnered with the city of South Burlington on a project that aims to stimulate democratic engagement through data collection.
The company launched an app in June that asks residents questions about whether they ride a bike to work or how well their roads are plowed in the winter. AI machine learning technology analyzes the data, which is then sent to city officials to inform their decision making.
Vermont’s Agency of Transportation is also eyeing the potential of AI. Safwan Wshah, an assistant professor of computer science at the University of Vermont, is working with VTrans to develop a tool that uses AI to map and geolocate traffic signs on the state’s highways and roads.
The state tracks and maintains all of its traffic signs in order to keep its infrastructure up to date, Wshah said. To do this, a van drives all over Vermont with a camera attached, and it locates and snaps pictures of every sign it passes. The picture and GPS location of that sign is then recorded on a virtual map.
The AI technology organizes this data, so when a sign in a specific location needs to be replaced, or a staff member needs to know how old a sign is, they can consult the virtual map and get the information instantly. Without AI, the organizing and identification would have to be done manually. A job that previously took hours now takes seconds, Wshah said.
Wshah said he understands why the public may be cautious about accepting intelligent technology into their daily lives. But he sees AI as a tool that could help save lives. Because more than 90% of car crashes are caused by human error, Wshah said his technology could one day be used in self-driving cars to locate guardrails and other objects in the road, preventing accidents that human drivers can’t.
“At the end of the day, if I can trust AI more than a human,” Wshah said, “why aren’t we going that far?”
Task force hesitant to regulate AI
Vermont is one of only a few states in the nation to begin researching AI’s impact on local populations and economies.
The task force has drafted some tentative recommendations based on the feedback and research it has collected so far. It is likely to urge the Legislature to create a permanent artificial intelligence commission to study and make recommendations about policy, and to create a code of ethics to guide the growth and use of AI in the state, said Brian Bresland, of the Vermont Society of Engineers and co-chair of the task force.
And while the task force is hailing itself as a proactive measure, AI has been in use in Vermont for years. Faraday, the AI marketing company, was established seven years ago. Lely Astronauts have been in use in Vermont for about a decade. Bina48 has gained international attention since her creation in 2010.
Cina acknowledged that companies have little incentive to approach his task force, or the proposed AI commission, and encourage regulations that could create more hoops for them to jump through. As a result, the commission may not be aware of all the ways the technology is being used in Vermont, particularly in potentially problematic ways, a concern he said the task force plans to address.
“We need to think about this. How do we get people to participate with us if someone is up to something that is at all unethical,” Cina said. “Unethical people don’t report themselves to the government.”
But Cina said he doesn’t have any present concerns about specific uses of AI in the state.
“I don’t see any specific regulations needed at this time,” Cina said. “What I think is needed is further discussion.”
This cautious approach to government involvement with AI is shared by James Duff Lyall, executive director of the American Civil Liberties Union of Vermont and a member of the task force. He said that while there have been some concerning uses of AI in the state, like social media monitoring technology in public schools, it’s largely too soon for him to identify any overall risks, even though the task force has been at work for over a year.
Elsewhere, concerns are being raised that AI can be a dangerous tool. Some worry that AI algorithms track online decision-making and infringe on privacy in order to create more manipulative marketing campaigns. And an AI tool developed to identify prospective job candidates favored mostly male applicants, raising concerns about implicit bias in the technology.
Police departments have also come under fire for employing AI. The New York City Police Department has been criticized for using AI for “predictive policing,” in which algorithms collect potentially biased data about neighborhood income levels and demographics to determine where police should patrol. Police have also been criticized for using AI-driven facial recognition software, which can misidentify people, and does so more often for people with darker skin than for those with lighter skin.
Law enforcement in Vermont has not embraced AI yet. Burlington Police Chief Brandon Del Pozo said his department isn’t rushing into AI technology. He said the city would first need to establish ethical guidelines and have a community conversation about any technology his department would potentially bring on.
While Duncan, from the Terasem Movement Foundation, understands the need to slowly assess AI’s presence in Vermont, he said there are basic regulations Vermont lawmakers should enact that would set AI on a trajectory of “responsible growth.”
He recommends that lawmakers require companies that use AI to be transparent about their data and its inherent biases, and hold companies legally accountable for any injustices their technology may cause.
“If we just leave this to corporations, we’ve already seen the downside to that,” Duncan said. “Capitalism is not a form of government, it’s an economic model.”
AI technology is developing quickly, and Duncan thinks lawmakers are now playing catch-up. But despite this technology’s power, and the murky regulatory waters Vermont’s legislators are just now wading into, Duncan said he’s excited to see what comes next.
As with any new technology, Duncan believes society will adapt to the challenges — and opportunities — AI presents to humanity.
“Fire used to be terrifying. It used to burn down forests and we used to run for our lives,” Duncan said. “And now we use fire to heat our homes in a Vermont winter because we learned how to work with it, we learned how to regulate it. We even learned how to make it safer.”
“We should still have a healthy respect for what fire can do,” he said. “But we shouldn’t run from it.”