Ok, so this started as a comment on Roisin Foley’s “Apparently I’m a Man on the Internet?” post. But then it grew, and grew, and grew, so I’m putting it up as a post of its own.
Analyzing the Gender Genie at this point, I’d say a more complete/complex understanding of people would lead to a better computer model. We should also realize that just as we have the ability to change technology (this model) to fit our diversity, such a technology might also impact us. For instance, some people using the Genie might, as a result, start writing in a more stereotypically “male” or “female” way.
In my computational linguistics class, we saw that statistical tests often prove to be very accurate (and computational linguistics is pretty closely related to what we’re talking about here: analyzing text, just in this case based on the gender of the person writing it). I think it’d be really interesting (I mean, I’m just personally curious) to use statistics as a way to guess someone’s gender from their writing: could a computer model of gendered writing actually be accurate? This still brings up issues around a technological tool enforcing gender stereotypes. Plus there’s the whole issue of categorizing people into “male”/“female” when we’ve just seen how gender is more a fluid expression than a binary categorization.
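(If you’re curious what that kind of statistical model even looks like: I don’t know the Genie’s exact formula, though I believe it’s based on published work on predicting author gender from word frequencies. Here’s a minimal sketch of the word-counting idea; the word list and weights are completely made up, just to show the shape of the algorithm.)

```python
import re

# Hypothetical per-word weights: positive nudges the guess toward "male",
# negative toward "female". These numbers are invented for illustration,
# not the Gender Genie's actual values.
WORD_WEIGHTS = {
    "the": 7, "a": 6, "as": 23, "what": 35,
    "with": -52, "if": -47, "not": -27, "me": -46, "she": -6,
}

def guess_gender(text: str) -> str:
    """Sum the weights of the known words in the text and guess from the sign."""
    words = re.findall(r"[a-z']+", text.lower())
    score = sum(WORD_WEIGHTS.get(w, 0) for w in words)
    return "male" if score >= 0 else "female"

print(guess_gender("If she writes to me with this, I'm not sure what to think."))
```

Of course, even a toy version like this makes the problem obvious: whatever words and weights go into that table are exactly the stereotypes getting encoded into the tool.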
Final thought: the Genie seems to be a kind of reverse “Turing test” on gender. As I think we discussed, according to the test, a machine is deemed “intelligent” when a human communicating with it cannot tell the difference between it and a fellow human. So a Turing test on gender would deem a machine “male”/“female” (now that’s an interesting thought, isn’t it?) when a human communicating with it can’t tell the difference between it and a person of that gender. A reverse Turing test on gender, then, would deem a human “male”/“female” when a machine communicating with that human can’t distinguish between them and other people (or some model) of the same gender.