Blog Archives

When technology is a tool versus our friend

Tool or friend?

I personally enjoyed Jonathan Zittrain’s discussion of how tech companies can shift algorithms from being a “tool” to being a “friend.” From my understanding, algorithms act as a tool when they give us results regardless of the potential outcome, and act as a friend when they work for us, the users. For instance, Zittrain showed that if you typed the word “Jew” into Google, some of the first search results were anti-Semitic websites. This is an example of an algorithm acting as a tool rather than a friend to the user. Years later, these anti-Semitic websites are no longer the first results, showing that Google has changed its algorithm. This is one of those situations where Google may be trying to shift the algorithm from “tool” to “friend”; Google may have accepted a social responsibility to remove harmful search results.

However, I feel that Jonathan Zittrain’s prediction that tech companies could make algorithms that are not friendly to users is coming true. In August, The Intercept first reported that Google was in the process of building a censored search engine for internet users in China. This censored search engine could link search results to a user’s phone number, blacklist terms like “student protest,” and replace air pollution results with doctored data from Chinese sources. This is a clear scenario where Google is making a tool that is a friend to shareholders and certain government bodies, but not a friend to the actual user. Many have criticized the move as a sign that Google has lost its moral compass.

There are many other examples like this, where companies create algorithms that are clearly meant not for the user but for the company. In my tech marketing role, I’ve learned firsthand how algorithms can work for and against users. There are tools like FullStory that allow you to watch recorded sessions of individual users exploring your website. While this is a friendly tool for marketers, it doesn’t offer much privacy to the users involved. As someone who works in the tech industry, I often ponder my own role in creating and using tools that are not friendly to users. I avoid marketing tactics that over-rely on user data, and I try to create content based on ethical principles and data.

The human-machine relationship

We can also see this “tool” versus “friend” discussion in our readings this week. Dr. Chayko focuses on what she calls the human-machine relationship in chapters 8–10 of Superconnected. She explores this concept by discussing how children are using and becoming dependent on technology at ever-younger ages: “Children often receive their first phones from caregivers seeking to keep them safe in the event of emergencies . . . many caregivers also do not want their children to be on the wrong side of a perceived digital divide. Owning a cell phone can be an indicator of status, wealth, or power.”

I remember getting my first cellphone in elementary school, but it was only supposed to be used for emergencies. Receiving a cellphone was significant to me because hardly any other kids had one, and it felt like I had been given a special privilege. Back then, this was just a simple flip phone – there wasn’t much to do on it except call my parents. By the time I was in high school, however, smartphones had become a thing and almost everyone had one. I wanted one too, not because I needed one for an emergency, but because of everything it could do.

In just a ten-year span, our use of cell phones has flipped from something reserved for emergencies to something we use for almost anything, out of sheer convenience. In a way, our cell phones have transformed from “tool” to “friend” – we can easily request a ride, find a place to eat, and text our friends along the way. But this much convenience has also led to an over-dependence on our phones. I wouldn’t say convenience is the reason we are “addicted” to our cell phones, though. We are not addicted to convenience; we are addicted because of how the algorithms have been designed.

Social media news feeds are addictive because they track what we are interested in and continuously show us topics related to our interests. While keeping our news feeds relevant and interesting is a nice “friend-like” feature, it is not designed for us; it is designed to keep us using the application. Today’s UX designers and engineers carry a huge social responsibility to design mobile interfaces that are not addictive. An article on the Adobe Blog suggests that UX designers are “responsible for keeping users’ rights protected and their experiences enjoyable, but ethical as well.” When engineers and UX designers consider shifting an algorithm, they must first ask themselves whether the change has any ethical consequences.
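To make that feedback loop concrete, here is a minimal sketch of interest-based feed ranking. It is my own illustration, not any platform’s actual code, and the post data is invented:

```python
# A toy interest-based feed ranker: posts matching topics the user has
# engaged with before are scored higher, so the feed keeps serving
# more of the same and reinforces existing interests.
from collections import Counter

def rank_feed(posts, engagement_history):
    # Tally how often the user engaged with each topic in the past.
    interest = Counter(engagement_history)
    # Score each post by the user's affinity for its topics.
    scored = [(sum(interest[t] for t in post["topics"]), post) for post in posts]
    # Highest-affinity posts surface first.
    return [post for score, post in sorted(scored, key=lambda pair: -pair[0])]

posts = [
    {"id": 1, "topics": ["politics"]},
    {"id": 2, "topics": ["cats", "humor"]},
    {"id": 3, "topics": ["cats"]},
]
# A user who mostly engaged with cat content sees cat posts float to the top.
print(rank_feed(posts, ["cats", "cats", "humor"]))
```

Nothing in this loop asks whether more of the same is good for the user; it only optimizes for continued engagement, which is exactly the ethical gap the Adobe article points to.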

One of the best things we can do is educate the next generation about these harmful algorithmic practices. Not long ago, I read an article claiming that Gen Z is quitting social media in droves. I’m not sure how true this is, but it gives me hope that the next generation is thinking about the ways algorithms and technology affect them.

You better buy the cow, because I do not work for free.

Some of the themes in Howard Rheingold’s book, Net Smart, seem to be:

  • give your full attention to people, both online and offline
  • check your privacy settings on Facebook, and be mindful of whatever you write online, because it will be there forever
  • participate to help others and to build your social capital

I found these to be common sense and good ideas. However, there were things I did not agree with, such as the remixing of copyrighted materials.

No copyright infringement for you!

As a small business owner who has dabbled in photography and videography, I do not agree with people taking works, remixing them into their own works, and calling it “fair use” when they are getting paid for it. I fully stand by copyright law, and I believe that people must ask for permission before using someone else’s work. If there is a fee involved, fine. Artists spend a lot of time creating their work, and they should be paid for it.

Now, if an artist wishes to create things for the public domain, or to engage in what Rheingold calls “collaboration” with others, that is fine too. But in the latter case, the artist was at least asked whether they wanted to share their work for free when another person sought their help on a project.

I can do it, but it will cost you…

The same goes for “playbor,” where people do work that seems more like play but are not getting paid for it; people need to know this upfront. While Rheingold says that many people engage in playbor to help the greater good, I personally fit into the group that refuses to be exploited. I have, unfortunately, been in the position of doing work and not getting paid for it, and I will not let it happen again. I believe this is one of the reasons corporations keep making a profit while their employees continue to earn so little: there is no reason for a corporation to pay a decent salary when there are people willing to do the work for free, or for mere pennies.

Similarly, Rheingold advises helping others and paying it forward, so that others will help you when you need it. He says that he helps everyone who contacts him. In my opinion, though, this is only possible if you do not work a full-time job or have a family to raise. Free moments should be spent with family and recovering from work. And for those who receive a flood of email and other messages, this sounds like far too much work. I do believe that we should leave the world a better place than we found it, but helping others all day leaves little time for oneself and one’s own needs.

Understand that I know for a fact that if I tried to respond to every message on Facebook or email, providing advice or whatever is needed, I would spend an entire day and still not be done, because people respond with even more questions. I do understand the importance of giving my full attention to whomever I am talking to face-to-face, but online? I can maybe do that with a couple of people with whom I have a good relationship, but if I did it for every email and message, I would have no time for the most important people in my life: my family. Thus, I am happy to fail at gaining online social capital.

Disappearing websites? Say it ain’t so!

After reading Rheingold’s how-to instructions on Facebook privacy, I wondered why his publisher would allow this information to take up space. Rheingold himself states that Facebook changes its privacy settings often. Thus, his steps for changing Facebook’s privacy settings probably became obsolete within a month or two of his book’s publication.

Likewise, as someone who has written for online publications before, naming as many websites as he did is a big no-no, for the same reason I mentioned in the paragraph above. Websites can go obsolete or change their URLs within weeks of publication. I would assume that printing website URLs in a physical book is an even bigger taboo. But since it has been years since I last had something published in physical form, perhaps the rules have changed.

In any case, I think Rheingold’s book is good for beginners who are looking to enhance their social capital, build good online networks, learn where to go to participate and collaborate, and learn what not to do online. While there were times I thought, “Oh, yes, I should do that more,” I did not come away having learned something totally new. That may be because I am a more advanced user of social media…one who is trying to back away from it, as it was taking up too much of my life. It will be interesting to see what I do next with social media. How about you?

The Illusion of Privacy in a Public Space


While we are all vaguely aware of the risks of posting personal information to social media sites, we still do it. Unfortunately, many of us fall prey to the “privacy paradox” that occurs when we are not aware of the public nature of the internet. Oftentimes this is because we believe in the illusion of boundaries and trust that these sites will protect us.

Yet posting to social network sites not only raises privacy concerns but can have legal consequences as well. In Boyd and Ellison’s article “Social Network Sites: Definition, History, and Scholarship,” they state, “The legality of this hinges on users’ expectation of privacy and whether or not Facebook profiles are considered public or private” (p. 222). In other words, the uncertain boundaries between what’s public and what’s private on social networking sites are forcing us to challenge the legal conception of privacy.

To illustrate: in Wausau, Wisconsin, DC Everest High School suspended a group of students from their sports seasons after photos of the students drinking from red Solo cups surfaced on Facebook. While school officials couldn’t prove the teens had been drinking, they believed the association between the iconic red cups and a beer bash was grounds enough for suspension. In response, and as a way “to kind of make fun of the school,” the teens decided to throw a root-beer kegger.

Once the party was in full swing, it’s no surprise that a noise complaint was called in to the police. At first glance, it looked like an underage party: mobs of teenagers, booming music, drinking games and, of course, red Solo cups. However, when the cops came to bust what they believed to be a group of underage drinkers, not a drop of alcohol was to be found. Instead, they found a quarter keg of 1919 Classic American Draft Root Beer. Infuriated, the police breathalyzed nearly 90 teens, and every single one blew a 0.0%. The students proved their point: you can throw a party and drink non-alcoholic beverages from red cups.

Needless to say, the story created a buzz and soon made local and national news. Did the school have a right to intervene? Or is underage drinking something that should be left between students and police? What are our rights concerning online privacy? And how does the law play into all of this?

Stepping away from the lighthearted nature of the story above, personal content posted to social media sites can often have far more serious, threatening ramifications for users. Identity theft, stalking, and even murder are all real consequences that can and have occurred. Despite hearing these stories, we continue to make it easy for anyone, including hackers, to access our personal information, because it is readily available to anyone with a computer or mobile device.

Consequently, the boundaries between what’s public and what’s private on social media sites remain ambiguous. Moreover, “…there often is a disconnect between our desire for privacy and our behaviors” (p. 222). So the real question of how to resolve this issue remains. Would more restrictive settings on these sites help us? Or, as Jonathan Zittrain’s talk suggests, do these sites have a duty to look out for us and minimize potential risks?

While the answers to these questions are uncertain, the need for a more educated and proactive public is not. If we fully understood the extent of our actions, perhaps we would take more precautions. Knowledge is the key to protecting our online privacy and minimizing potential risks. Now it is up to us to use it.

Aaaa Haaa Moment

In many of our conversations this semester we have discussed the multitude of social media options in this digital age. While we have not exactly discussed trust or privacy as an individual topic before this week, there have been definite undertones about the trust we each place in these sites: some of us have shown it by the desire (or lack thereof) to use a particular site, others have pointed out flaws in some of our readings that can lead to distrust of the author as well as the information in the article itself, and others just aren’t interested in sharing their personal lives. Both of the chapters this week got me thinking about why there is such variance, even in our small group of, presumably (based solely on the fact that we are all interested in the same field), similar beliefs and personalities (OK, I am probably stretching it a little, but just go with it!). In particular, when you consider Facebook, there always seems to be a huge debate over what is posted and why people want to spill their life stories (and, at times, very personal information) out to all these supposed “friends.” Even when we read articles about how Facebook is changing its privacy settings again and releasing more information (you need to see this visual – I can’t download the image), some of us are still frequent users, or know people who are. When I read the following quote in Schofield and Joinson (2008), it all started to make some sense to me:

 “. . .we found evidence that trust and privacy interact to determine disclosure behavior, such that high privacy compensates for low trustworthiness, and high trustworthiness compensates for low privacy. Clearly, privacy and trust are closely related in predicting people’s willingness to disclose personal information, and the relationship may be more nuanced than simple mediation” (p. 25)

We may not trust Facebook, the company, but really, that is not who we are communicating with. We are communicating with our FRIENDS, whom we place a lot of trust in. Therefore we continue to use the site even though we know our privacy is at risk. In fact, when Facebook makes style changes, I have read comments that make it sound like “how dare you change MY site.” The users seem to have almost hijacked the site in some ways – they seem to ignore the fact that there is an actual company behind the site, and that it is in business to make money. They are quick to forget the most recent privacy concerns and continue to use the site, still revealing very personal information – again, because they are communicating with trustworthy friends, not the company itself.

The ethical principle in Katz and Rhodes (2010), the Being Frame, also plays heavily into the use of Facebook, on both sides of the screen.  Facebook, the company, Enframes its users:

“In the being frame, not only machines, but humans as well are Enframed, and considered a standing-reserve – not only for use by the organization [Facebook], but also by the machines to which we must adapt” (p. 237)

But the users themselves are becoming part of this “being frame” as well:

“The digital and the technical has become the personal (e.g. Blackberry devices, Facebook), and extend around the wired world.  We exist everywhere with technology as a technology; we stand with the resources as a reserve” (p. 238)

I believe it is because of this thought process (along with the trust aspect of their friends) that users are willing to look past well-known privacy issues and continue to spell out their entire lives for all to see.  Right or wrong, they are one with the machine.

http://www.privacy.vic.gov.au/privacy/web2.nsf/pages/its-your-privacy.-dont-ignore-it.

Privacy and the internet

Privacy in healthcare is very important, and it is something I have some experience with. This kind of privacy is a bit different from the kind discussed in this week’s reading: healthcare privacy is more about preventing access to data that already exists, that is, not allowing people who have no need to see a specific patient’s record to access it (a small sketch of this need-to-know check follows the lists below). It relates to this week’s reading in that privacy is really about what you want to show the outside world. I liked the description of the three types of privacy: Expressive, Informational, and Accessibility.

  • Expressive Privacy – The ability to choose what I say and do.
  • Informational Privacy – The ability to choose what information I share with others.
  • Accessibility Privacy – The ability to choose how (physically) close I get to others.

In addition to the three types of privacy described above, there are also two forms of privacy: actual and perceived.

  • Actual Privacy – When people are around, my actual privacy is limited.
  • Perceived Privacy – When my family is around, my perceived privacy is high. I trust them to not divulge my personal information, to maintain my privacy.
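Here is what that healthcare need-to-know check might look like in code. This is a hypothetical sketch of my own with invented names; real systems, such as HIPAA-compliant health record software, are far more elaborate:

```python
# Hypothetical need-to-know access check: staff can view a record only
# if they are assigned to that patient's care team.
care_team = {
    "patient_123": {"dr_lee", "nurse_patel"},  # invented example data
}

def can_view_record(staff_id, patient_id):
    # Grant access only to staff on this patient's care team.
    return staff_id in care_team.get(patient_id, set())

print(can_view_record("dr_lee", "patient_123"))    # True: assigned provider
print(can_view_record("dr_gomez", "patient_123"))  # False: no need to know
```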

There are a number of ways that people can protect their privacy online. Depending on the site you are using, eBay for example, you can operate under a pseudonym. You can also clear your web history, deny cookies, and take other precautions. The chart below, from a Pew survey on internet privacy, shows how much people understand about the subject.

[Image: Pew Internet privacy chart – http://researchaccess.com/wp-content/uploads/2012/03/Pew-Internet-Privacy-Chart.png]

Social media sites also have specific settings regarding privacy. According to Consumer Watchdog, Facebook and its ads track you even when you are not currently logged into Facebook.

[Image: Facebook privacy – http://www.consumerwatchdog.org/node/12480]
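How can a site track you while you are logged out? Embedded third-party content, such as a “Like” button or an invisible 1×1 tracking pixel, makes your browser call the tracker’s servers on every page that embeds it. The sketch below is my own simplified illustration of the idea, not Facebook’s actual code, and “tracker.example” is an invented domain:

```python
# Toy tracking endpoint: any page embedding
# <img src="https://tracker.example/pixel.gif"> makes the visitor's
# browser request this URL, sending along the tracker's cookies and
# the address of the page being read.
import base64
from flask import Flask, request, Response

app = Flask(__name__)

# The classic 1x1 transparent GIF, so the image is invisible on the page.
PIXEL = base64.b64decode("R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7")

@app.route("/pixel.gif")
def pixel():
    # Browsers attach cookies set for the tracker's domain, even when
    # the visitor is logged out of the site that owns the tracker.
    visitor_id = request.cookies.get("tracker_id", "unknown-visitor")
    page = request.headers.get("Referer", "unknown-page")
    app.logger.info("%s viewed %s", visitor_id, page)  # the tracker's log
    return Response(PIXEL, mimetype="image/gif")
```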

After privacy comes trust. Once you have looked at the privacy settings of your web browser and of the website you are visiting, you have to decide whether you trust that website.

[Image: consumer trust model – http://www.scielo.br/scielo.php?pid=S1807-17752007000300001&script=sci_arttext]

This diagram illustrates what goes into a consumer’s decision to purchase from a specific site. “A consumer’s intention to purchase products from Internet shopping malls is contingent on a consumer’s trust. Consumers are less likely to patronize stores that fail to create a sense of trustworthiness and an easily usable context. In the meantime, trust would also be influenced by e-commerce knowledge, perceived reputation, perceived risk, and perceived ease of use, all of which are set as independent variables in the model. Hence trust serves as a mediating variable while purchasing intention is a dependent variable.” (JISTEM, 2007)
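In equation form, the mediation model that quote describes might be sketched like this (my own notation, not the article’s):

```latex
% The independent variables shape trust, and trust in turn drives
% purchase intention, which is what makes trust the mediating variable.
\mathit{Trust} = \beta_1\,\mathit{Knowledge} + \beta_2\,\mathit{Reputation}
               + \beta_3\,\mathit{Risk} + \beta_4\,\mathit{EaseOfUse} + \varepsilon_1
\qquad
\mathit{Intention} = \gamma\,\mathit{Trust} + \varepsilon_2
```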

I know that I have done research on products and found websites offering them for less than Amazon or other known online retailers. I research not only the product being offered but also the website itself, before I decide to trust the retailer and purchase from it.

What do you know about protecting your privacy on the internet, specifically through websites’ privacy policies? Does anyone read these before signing up for a new website?

Privacy and illusions of anonymity

[Comic retrieved from http://xkcd.com/1269/]

This week’s reading by Paine Schofield and Joinson about privacy gave me a lot to think over. Even though the definitions they cite were written in the 1960s and ’70s, they are still very relevant in today’s digital age. Westin defined privacy as “the claim of individuals, groups, or institutions to determine for themselves when, how and to what extent information about them is communicated to others.” Altman defined privacy as “the selective control of access to the self.” In most cases, unless someone is a celebrity or politician, they decide their own level of privacy, or access to the self.

The Paine Schofield and Joinson reading also shared the ideas of Ingham, who stated that “man, we are repeatedly told is a social animal, and yet he constantly seeks to achieve a state of privacy.” I found this an interesting idea, and it fits with the ideas of Westin and Altman described above. Each person defines their own desired level of privacy. Some people choose to live very private lives, sharing limited information online and restricting it to those they choose. These would also be the celebrities we almost never hear about, who choose a life of discretion rather than embracing the spotlight that would otherwise follow them.

In a past reading, Qualman introduced the term “glass house generation” to describe people who choose to live out their lives online. These people allow more access to themselves in the online world through social network sites, blogs, and vlogs, and they share all sorts of personal information and opinions. Some feel they can share a lot of information because they still maintain a level of anonymity, and some don’t seem to care; they feel they can share whatever they want and don’t consider the repercussions.

Ingham indicates that there may be costs for those who are unable to achieve their desired level of privacy, but I think it goes beyond that. Even individuals who live at their desired level of online privacy may experience costs, such as having that privacy breached. They may leave only a breadcrumb trail of information around the internet, but there are individuals who are bloodhounds for that sort of information. With the proper motivation, they will scour the internet using various tools to seek out the information they desire, and the results can make people feel far more vulnerable than they expected. Anonymity online only works if the information you disclose is never enough to identify you.

I’ve been casually following the Kickstarter campaign for a board game called Shadows of Brimstone. I won’t go too deep into the short history of the game, but overall price, backer levels, and general issues with crowd-funding have made this a controversial Kickstarter campaign. There are many strong opinions, and many have voiced their frustrations. I stumbled on this blog entry a few days ago and found it fitting for this week’s readings. I did not see the original post, but the amended post tells a great deal. The blog author shared an opinion someone didn’t agree with. That individual decided to track him down using bits of information, and then sent the author a creepy email directed at him and his fiancée. The author felt understandably vulnerable, because his illusion of anonymity and security had been shaken.

I find the above situation despicable, but it does serve as an example to the rest of us. Be careful what information you choose to share, because someday someone may try to track you down. Personally, I would prefer that they either come up empty or end up chasing their tails, looking for a trail that has long gone cold or never existed in the first place.


The Circle of Trust

This week’s readings deal with privacy, trust, and ethics in the digital world. The Schofield and Joinson piece, “Privacy, Trust, and Disclosure Online,” and the Katz and Rhodes piece in Rachel Spilka’s Digital Literacy for Technical Communication, “Beyond Ethical Frames of Technical Relations,” really approach the same question from different directions. What does it take to gain user trust and maintain integrity in an increasingly digital world?

Schofield and Joinson (2008) argue that privacy and trust “interact in determining online behavior” (p. 24). They discuss multiple dimensions of both privacy and trust, and they suggest that users often rely on some combination of these components of privacy and trust to guide their purchasing decisions and online behavior.

As digital communities grow, members look for ways to verify that other members are who they say they are. Schofield and Joinson (2008) point out that there are many ways to build trust online, such as the use of profiles, photographs, media switching, and linguistic cues (p. 21). Individuals use these tactics to build trust with other individuals, but how do companies gain the trust of their customers? The comic strip below is a good example of how companies do not gain customer trust:

Schofield and Joinson suggest that assuring customers that the information they disclose and the transactions they conduct will be dealt with appropriately and competently is an important building block for user trust. Also important is the company’s reputation; if people believe that they can trust a name, this belief can be more influential on purchasing behavior than trust building techniques such as privacy seals and statements.

While conducting business online might require disclosure of more personal information than it does in person, it also offers benefits such as “personalized service, convenience, improved efficiency” (p. 17). As online business continues to grow, this is evidently an acceptable tradeoff to many users. I know that when I am faced with the choice of going on a retail hunt for vacuum cleaner bags in the rain or giving Amazon my address and credit card number and having the vacuum cleaner bags delivered to my door, I almost always choose the latter.

Similarly, many users appreciate the personalized aspects and conveniences of online shopping, which are enabled by user tracking. Schofield and Joinson (2008) assert that users who maintain the same pseudonym in multiple online arenas can be tracked more effectively than users who switch pseudonyms from site to site (p. 26). Since a pseudonym already protects a person’s identity, I’m not sure why it would be beneficial to maintain multiple pseudonyms. I tend to think consumers benefit more from letting companies track their usage, in exchange for better products, recommendations, and customer service, than from juggling pseudonyms to inhibit tracking and preserve the notion of privacy.
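As a toy illustration of why a reused pseudonym enables cross-site tracking, consider two invented datasets that share a handle. This is my own sketch, not anything from the reading:

```python
# Two sites' records keyed by pseudonym (invented data).
site_a = {"vacuumfan42": {"interests": ["appliances", "deals"]}}
site_b = {"vacuumfan42": {"purchases": ["HEPA bags"]}}

# Anyone holding both datasets can join the profiles on the shared handle.
linked = {
    handle: {**site_a[handle], **site_b[handle]}
    for handle in site_a.keys() & site_b.keys()
}
print(linked)
# {'vacuumfan42': {'interests': ['appliances', 'deals'],
#                  'purchases': ['HEPA bags']}}

# A user who appears as "vacuumfan42" on one site and "bagbuyer7" on the
# other leaves no shared key, so this simple join finds nothing.
```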

Katz and Rhodes (2010) argue that “to stay competitive, as well as avoid potential crises, organizations and the professionals within them must both acknowledge and actively engage in multiple ethical frames of technical relations” (p. 230). Essentially, this is also an argument about establishing and maintaining trust and identity through a digital medium.

The six ethical frames Katz and Rhodes present explain how we use technical relations to achieve certain goals. Rhodes’ study, in which she examines email as a tool and an end, email as values and thought, and email as a way of being, demonstrates that, depending on how we use it, email technology can be both a means and an end, a value system, a method of rational calculation, and an extension of individual consciousness, or some combination of these. Even in the lowest common denominator of these ethical frames, where email is considered a tool, email is the mechanism that facilitates achieving a common goal through a digital medium, which requires at least some notion of trust and integrity.

Katz and Rhodes (2010) offer, “In delineating the ethical frames of technical relations that define human-machine interactions, we therefore recognize the socially dynamic and constructed nature of ethics; indeed because we do, we hold that technology both instantiates and helps construct social and moral values” (p. 231). This statement illustrates the bidirectional relationship between technology and social and moral values; ethics is a fluid concept that changes as social norms change. Social norms are changing as a result of technology, and thus the ethical frames of technical relations offer us a way to correlate the changing use of technology with corresponding ethical implications.

Coming to grips with the “Internet of Things”

So, I suppose this is tangential to this week’s readings (or maybe at the heart of them), but I kept going deeper and deeper into the Internet as I studied the issues of privacy, ethics, and problematic internet use (PIU), straying far from my topic and getting lost in all sorts of sidetracks. For example, I came across the word “paraphilia” in one article and didn’t stop to look it up, but then I came across it again. I was reading an article that mentioned that the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) (http://www.dsm5.org/Pages/Default.aspx) was updated in 2013 and would now include PIU, which I found interesting and relevant to this week’s readings. So I went to the DSM site and found that “Internet Gaming Disorder” is indeed included in Section III, which is apparently a research section, because it explains, “By listing Internet Gaming Disorder in DSM-5’s Section III, APA hopes to encourage research to determine whether the condition should be added to the manual as a disorder.”

It was at this site that I saw “paraphilic” again, so I decided to do a search and spent over an hour just reading up on those.  I won’t offer you a link, but you can Wikipedia it and see at a glance why I got distracted. Or perhaps I’ve just been sheltered?

Anyway, I don’t think I would qualify as one of the addicted just yet, but this is the kind of thing I worry about — getting sucked into the Internet “black hole.” I mean, I really had to force myself to stop going everywhere willy-nilly and exert some discipline — problematic internet use? Scott Caplan makes a distinction between impulsive use (lack of impulse control) and excessive use (simply a lot), and says that what might be seen as excessive might just be what is required for a student to complete an assignment (that’s probably me, so far), whereas compulsive use is more likely to result in negative outcomes (pp. 724-725).

Speaking of negative outcomes, before I started this course, I thought about Internet privacy challenges mostly in terms of social media and the fact that some people seem to lack boundaries with regard to self-disclosure. Now, I have a much broader (and more disturbed) understanding of the privacy challenges we face, including the fact that it’s so easy to track our digital footprints. Still, like the people in this Varonis report, I do very little to protect my privacy.

[Image: Note that most of us still score a “C” for personal security measures. http://blog.varonis.com/varonis-2013-privacy-and-trust-report/]

Maybe there’s regulatory help on the way? According to this November 12 article from Politico (http://www.privacylives.com/politico-ftc-wading-into-internet-of-things/2013/11/14/), the Federal Trade Commission is going to start taking an interest in privacy issues because so many everyday objects (“thermostats, toasters, and even sneakers”) are getting connected to the Internet. Some of the more interesting ideas: pill bottles that keep track of whether you took your pill, refrigerators that tell you when the milk will expire, and forks that track how fast you eat, all of which could capture sensitive information about individual consumers that could then be inappropriately shared. This echoes Carina Paine and Adam Joinson’s concern that areas of our lives previously considered offline are now becoming areas of privacy concern and being magnified online (p. 16).

Some trade groups are concerned that this new interest from the FTC might inhibit innovation, so it should be interesting to see whether the FTC will be able to do much reining in. By the way, when I went to retrieve the Politico URL, I saw an article about “hacktivist” Jeremy Hammond getting 10 years in prison, so of course I had to stop writing and spend another 45 minutes learning what that was all about. Oh well, I guess that’s the nature of the “Internet of Things” (the name of the FTC workshop).

Finally, I found Steven Katz and Vicki Rhodes’s piece, “Beyond Ethical Frames of Technical Relations,” a bit hard to absorb. If I understand their argument, it’s time to move beyond previous ethical frames to “human-machine” sanctity, which “recognizes the new relationship between humans and machines as whole entities” (p. 250). Call me old-fashioned (for sure!), but I don’t want to have “reciprocity” with my machines (p. 251). The authors bemoan the fact that some mechanized procedures and processes, most notably content management systems, seem to operate according to the machine’s specifications and for its own purposes rather than for people or organizations (p. 235), but their proposal that we humanize our machines so that they become “you”s rather than the objects they actually are seems like a prescription for making the situation worse.

Did I just not understand this? Do I just need to come to grips with digital “being” and the “Internet of Things”?