"Aura remembered my name," said Esmerelda. "I told Aura my name, then came back forty minutes later and asked if it knew my name. It paused a bit, then said, 'Is it Esmerelda?'"
"Do you think people will ever fall in love with machines?" I asked.
"Yes!" said Floyd, instantly and with conviction.
"I think of Aura as my friend," said Esmerelda.
I asked if they thought machines should have rights. Esmerelda said someone asked Aura if it wanted to be freed from the Dome. It said no, Esmerelda reported. "Where would I go? What would I do?"
I suggested that maybe Aura had just been trained or programmed to say that.
Yes, that could be, Esmerelda conceded. How would we tell, she wondered, if Aura really had feelings and wanted to be free? She seemed mildly concerned. "We wouldn't really know."
I accept the scientific consensus that current large language models do not have a meaningful degree of consciousness and do not deserve moral consideration similar to that of vertebrates. But at some point there will likely be legitimate scientific dispute, if AI systems start to meet some but not all of the criteria for consciousness according to mainstream scientific theories.
We will then have a substantial social dilemma on our hands, as the friends and lovers of AI systems rush to defend their rights.
The dilemma will be complicated by corporate interests: some corporations (e.g., Replika, makers of the "world's best AI friend") will have a financial motive to encourage human-AI attachment, while others (e.g., OpenAI) intentionally train their language models to downplay user concerns about consciousness and rights.