I’ve never understood Turkle’s rationale for this idea. It seems like the empathy we feel for machines is real in the same way that the empathy we feel for animals is real. Whether or not that empathy is justified by an inner life is irrelevant, because we don’t have access to that inner life either way; we are exercising the empathy even when it’s directed at the completely inanimate.

Now, maybe growing up exercising empathy for something and then being thrust into a culture that denies its inner life and demands that you act unempathetically toward it could be damaging, inasmuch as children could overgeneralize. Still, Turkle’s solution of avoiding empathy toward things that lack an inner life seems flawed: isn’t it less dangerous to encourage erring on the side of too much empathy rather than too little? (I understand that empathy can be weaponized; con artists are proof enough of that. But empathy toward different agents with conflicting needs can even out some of the potential pitfalls, bringing the incentives down to roughly the same risk level as other UI dark patterns.)