Category Archives: Technology
In Superconnected, Mary Chayko discusses the inception of Google. Developed by Stanford PhD students Larry Page and Sergey Brin, the search engine revolutionized the internet when it became publicly available in the late 1990s and refined its algorithms in the early 2000s. Today, Google is the world’s leading search engine.
“At the same time that it produces results for the user, Google also stores, caches, and archives large portions of web content as the web is being searched…Apple, Microsoft, Facebook, Yahoo, and other major tech companies also allow the data that flows in and through their platforms to be mined and in some cases participate in the mining. As a result, nearly everything that is done on the internet is tracked, analyzed, stored, and then used for a variety of purposes,” Chayko writes.
Google Accumulates Power
In May of this year, Steve Kroft of the TV news magazine 60 Minutes reported on the power of Google and critics who say the company, worth three quarters of a trillion dollars, is stifling competition. Google, which is owned by the holding company Alphabet, went public in 2004. It has also bought more than 200 companies including YouTube, the largest video platform, and Android, which runs 80% of smartphones.
In the 60 Minutes story, Gary Reback, a well-known antitrust lawyer, says Google is a monopoly not only in search but also in other industries such as online advertising. Google also accumulates information about users and sells that information to advertisers. Reback points out that people tell search engines more than they tell their spouses, giving Google a “mind-boggling degree of control over our entire society.”
The Business Insider reports Google is also a major player in the news industry, surpassing Facebook last year as “the leading source of traffic to news publishers’ websites,” according to Chartbeat, and driving the majority of traffic to publishers’ websites from mobile devices.
Google Dominates its Competition
Also, in May, the Wall Street Journal’s Christopher Mims wrote about the growing demand to break up the monopolies of Google, Facebook, and Amazon. He writes, “…as they consolidate control of their markets, negative consequences for innovation and competition are becoming evident.”
Jonathan Taplin, a digital media expert, says in the 60 Minutes story that Google has no real competition because it has 90% of the search market and Bing, Microsoft’s search engine, has 2%. The co-founder of Yelp, Jeremy Stoppelman, points out that Google has changed its search results over the years so that instead of returning the best information from around the internet, results at the top of the first page are often from Google properties. Google lists results from its own data first such as maps, restaurant reviews, shopping, and travel information. This is especially important when many users are viewing results on the small screen of a mobile phone.
Google Faces Regulation
Google has been fined by the European Union for anticompetitive actions. Over the summer, the EU slapped Google with a $5 billion fine. According to the Business Insider, the EU ordered Google to stop using its Android operating system to block competitors. Google is appealing that fine. Last year, the EU fined Google $2.7 billion for illegally promoting its shopping search results over its competitors.
The U.S. government should follow the example of the EU and provide more oversight of Google and other tech giants. It’s clear that Google is a powerful force in society, and with the company’s dominance comes the need for transparency and accountability. Recently, Google, Facebook, and Twitter have been called to testify and answer questions at U.S. Congressional hearings regarding Russian interference in the 2016 U.S. elections. An Axios article by David McCabe offered more ideas on how the government could provide oversight:
- Require Google to release more information regarding its algorithms
- Make it easier to sue big tech companies like Google
- Designate it as a “common carrier,” which would allow the government to appoint a body to oversee Google
All of these options should be considered, and more should be done to make sure Google and other powerful tech companies do not wield too much influence over our lives without our knowledge and consent. It should be noted that I relied heavily on Google to research this blog post.
This week’s readings included many interesting topics; however, like many residents of elder-care facilities, I let Paro play with my pathos and nearly abandoned logos altogether. That statement may not be entirely true, for caring for our elderly is logical as well as emotional. I had never heard of Paro, My Real Baby, Nursebot, or Wandakun; however, I have little experience with nursing homes or elder care.
It seems logical that “there are not enough people to take care of aging Americans, so robot companions should be enlisted to help” (Turkle, 2011, p. 106). Although Turkle initially resisted how the word “care” was used, she eventually accepted that these caring machines have a place in today’s world. Of course, that decision came after interviewing nursing home patients who were “cared” for by these robotic companions. Plus, like Michael Sandel’s graduate students, Turkle considered how “robotic companionship could lead to moral complacency” (p. 124).
I began reading this chapter a couple of weeks ago but soon put it down, for it made me think of my grandmother, who died after an eight-year battle with Alzheimer’s. Last week I decided to delve further into the chapter and began to see the benefits of these robots. As Turkle reports, “one nursing home director says, ‘Loneliness makes people sick. This could at least partially offset a vital factor that makes people sick’” (p. 109). She then shares information about various nursing home residents and their relationships with their robotic companions. The elderly felt comfort, caring, purpose, and much more when interacting with their Paro or My Real Baby.
When my grandmother was in the nursing home, she had her room filled with dolls and stuffed animals. She talked to them and told them stories. On my last visit, I just watched her take care of her babies, for she no longer knew who I was (she pointed to a picture she had taped on her wall of a little girl and said, “this is Lani, not you”). Ironically, she was telling her dolls and babies about her grandkids. She talked with so much love and affection about us; I had never seen her like that before, for she was an old German woman who felt one shouldn’t show emotions or be sentimental. However, in this mental state, those walls were down and she was just telling a story about her grandkids, as if she were a kid right along with them. I am quite sure she subconsciously knew who I was, for before I left, she said, “I don’t know who you are, but I know I love you.” That is the only time she ever said that to me.
Those dolls and stuffed animals did for her what the robotic companions did for the people Turkle spoke with: they allowed her to feel and express herself in a way she couldn’t before. Such companions stimulate residents’ minds and emotions, keeping their brains active and allowing them to feel closeness with others even when they are not with their loved ones. Those companions are worth any price tag!
As I read Dave Clark’s “Shaped and Shaping Tools,” I was immediately brought back to rhetorical theory class with Dr. Dana Heller at Old Dominion University. I envisioned the chalkboard (yes, that long ago!) with drawings of de Saussure’s sign, signifier, and signified, and Charles Sanders Peirce’s interpretant, representamen, and object.
Of course, I teach my students how to write a rhetorical analysis in some of my composition classes, but we usually don’t delve into theory, so I enjoyed reading about it again in another graduate class, albeit 20 years later, and learning how to apply it to technologies. According to Clark, rhetorical analysis is “a loose grouping of related types of work that share a common goal: complicating common-sense understandings of technologies by analyzing them from a variety of rhetorical perspectives that demonstrate their immersion in social and rhetorical processes” (2010, pp. 92-93). Clark discusses how the classical rhetorical approach can be effective; however, “Johnson suggests that as a field we must argue for a rhetorical approach to technological design and implementation that places the users, rather than the systems, at the center of our focus . . .” (2010, p. 93). I agree, for when I teach my students about technical writing, I have them focus on audience, purpose, and context. This line of thinking, done before drafting, is similar to that of one who designs and builds technology. Those designers must consider the user, their purpose, and the context in which they will use that technology. When I have my students write website reviews, they critique the design, function, usability, etc. as they relate to the user. These reviews are written for a website designer in order to make the website more appealing and functional for its users.
If one is going to create technology, it is only logical to consider the audience who will use that technology, how they will use it, and with whom they will use it. Accordingly, in activity theory, groups and individuals “are analyzed with a triangular approach that emphasizes the multidirectional interconnections among subjects, the mediational means or the tools they use to take action and the object or problem space on which the subject acts” (Clark, 2010, pp. 98-99).
So, since technology emerged and reshaped man’s ability to communicate and complete tasks, the rhetoric of technology had to emerge and be shaped to meet the more complex world we live in. There is an obvious correlation between classic rhetorical theory and activity theory of technology today.
Technology today is embedded in our lives, and we need to examine the contexts in which we rely on it so that we can understand, assess, and design it for the ease of its users.
Hart-Davidson hits the nail on the head: content management systems (CMSs) “do not do that work by themselves” (p. 14). A CMS gives a company only what the company is willing to put into it. It is not a solution; it is a tool. It is exactly what we make of it. Hart-Davidson states that “technical communicators typically come to play many different roles and deploy diverse sets of skills over the course of a career” when using a CMS (p. 134). The roles mentioned must be assumed, but to successfully integrate a CMS, the company must also integrate one or more of its processes into the system to really benefit from it.
Training, or some kind of education on how the company uses a CMS, is key to success. I’ve used quite a few systems and have seen both excellent and poor uses of them in companies. When companies don’t have any rules around how a CMS is used, it becomes a free-for-all of good and bad information. It’s confusing. There is a plethora of content available online for learning how to use and manage these systems. However, even if you know how to use the system, that may not be how the company uses it. The video below only touches on some common mistakes in administering SharePoint itself, and it’s over an hour long.
Michael J. Salvo and Paula Rosinski both discuss “mapping” and “signposting” in information design (pp. 112-114). These concepts are a big part of UX and extremely important for ensuring users can become literate in a system. I’ve found these levels of user interface design are not well applied to most CMSs. At one of the companies I worked for, I had to redesign the front end of a SharePoint site to make it more accessible and simplified for others in the company. This tells me that we have a long way to go in CMS design. Confusion in using the interface itself will almost surely create inconsistent data, especially when most people have access to the system.
The process around how you use a CMS is key to making the system useful. Yes, it can allow versioning of documents, but when people are not required to update or sign off on documentation, it can create data that looks trustworthy but is not. Most systems have workflows integrated into them, but unless going through that workflow is part of a sign-off process for the deployment of a product, why would people go through the hassle?
To make sure our documentation is trustworthy, my team and I link our documents to specific releases of software. This way, it is clearer in what context a document can be assumed to be relevant. In terms of metadata, we make sure that everything is under our team’s section in the system. We also have the option to tag certain customers if a document is specifically relevant to that context. The process we employ around this ensures that we do not have to continually maintain every document, but can instead deploy documentation at our own pace and as needed.
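As a minimal sketch of this release-and-customer tagging idea (all document titles, field names, and the `relevant_docs` helper here are hypothetical, invented for illustration; our actual CMS handles this through its own tagging interface):

```python
# Hypothetical sketch: documents carry release and customer metadata,
# so relevance can be judged by context instead of constant maintenance.

docs = [
    {"title": "Install Guide", "team": "platform", "release": "2.3", "customers": []},
    {"title": "Custom SSO Setup", "team": "platform", "release": "2.3", "customers": ["acme"]},
    {"title": "Legacy API Notes", "team": "platform", "release": "1.9", "customers": []},
]

def relevant_docs(release, customer=None):
    """Return docs tagged for a release, optionally narrowed to one customer.

    A doc with an empty customers list applies to everyone; a doc tagged
    with specific customers applies only to them.
    """
    return [
        d for d in docs
        if d["release"] == release
        and (customer is None or not d["customers"] or customer in d["customers"])
    ]

print([d["title"] for d in relevant_docs("2.3", customer="acme")])
```

The point of the sketch is the design choice: the filter decides relevance at read time, so an outdated document simply stops matching current releases rather than needing a manual cleanup pass.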
I don’t think I could live without a CMS at a company these days, because the alternatives are much worse. But literacy in these systems remains a problem. This is probably due to the fact that the users are not the same as the customer. Additionally, I see many systems treated as a golden solution instead of a platform. It will be interesting to see how these systems and their usages evolve over time.
I found this week’s reading fairly awkward, as it included software engineers as technical communicators. “Software engineer” is a much-misused term to begin with. Rachel Spilka’s book gave me the feeling that they used to be more document-centric, but now they are more jack-of-all-trades developers and managers, sometimes DevOps, and sometimes just programmers. A lot of software industry titles trend toward a jack-of-all-trades type of job, hence the new title “full-stack engineer.” Full-stack engineers are usually developers who know all aspects of how to build a web application. Why pay multiple people when you can get just one who knows how to do everything? Initially, “technical communicator” sounded like a stretch for the software engineer’s toolbox.
When I was studying for my computer science degree, most professors seemed to verbally accept the fact that most of us were just not going to be gifted in the writing department. Writing was not a required or emphasized aspect even though I had a software engineering emphasis. In the industry, I cannot disagree with this either. Most legacy code I have worked on is not documented from the technical side at all. It’s not always a matter of talent or ability; honestly, the last thing most of my colleagues want to do after coding is sit down and write documentation for days afterward. Additionally, one extra line of code has the potential to invalidate most or all of a document about system functionality. Our management sees documentation as a nice-to-have, but it’s not a show-stopper if it’s missing. We are never interviewed on our writing skills. This first-hand knowledge made me raise an eyebrow when Spilka listed software engineers as technical communicators from the late ’90s to now.
What I realized partway through reading was that the documentation Rachel Spilka refers to has changed, just as the job titles have changed. The documentation a software engineer generates is dynamic and not always a formal breed of documentation. Spilka states a couple of times in the book that technical communicators’ audiences have changed, shifting from experts to novices. It seems to me that the responsibility for creating power-user documentation has been assumed primarily by software engineers, architects, and system engineers, while technical writers create more customer-facing or public documentation.
So, how do software engineers document? We document when we want to ensure that we don’t have to work more than we want. The documentation that we do produce is aimed at fellow engineers so we don’t have to repeat ourselves too much when new people are hired or start working on what we have already built. We also document production systems with installation and troubleshooting guides for when things go very wrong. Both of these types of documents we call “playbooks” in our engineering sector. These playbooks seem very similar to the initial documentation created by technical communicators in the ’70s (Spilka, 2010, p. 22).
We keep these playbooks on a content management system that is accessible by the entire company, so if they want they can just go to our page and try to find the answer to their question before talking to us. We can also receive comments on the content management system so that all discussions on the documentation are public. Sometimes the documentation just looks like notes and sometimes it looks like a proper installation document depending on its purpose. We also document even less formally by creating static and dynamic charts and graphs for the design of our system. These can be the most useful in explaining functionality to other software engineers. We also document by putting comments in code to explain exactly what we are trying to do algorithmically. All of these forms of documentation fully take advantage of the technological changes that have been granted to us to make technical communication more efficient.
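To illustrate that last form of documentation, comments that explain algorithmic intent, here is a small invented example (the function, its comments, and the design rationale are hypothetical, not taken from any real codebase):

```python
def moving_average(values, window):
    """Smooth a series by averaging each point with its neighbors.

    We use a simple sliding window rather than exponential weighting
    because each sample should count equally in the smoothed output.
    """
    if window < 1 or window > len(values):
        raise ValueError("window must be between 1 and len(values)")
    averages = []
    for i in range(len(values) - window + 1):
        # Sum the window and divide once per position; kept dependency-free
        # so the snippet can be pasted straight into a playbook.
        averages.append(sum(values[i:i + window]) / window)
    return averages

print(moving_average([1, 2, 3, 4, 5], 3))  # → [2.0, 3.0, 4.0]
```

The docstring records the "why" (equal weighting) while the inline comment records the "how," which is exactly the split that makes such comments useful to the next engineer.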
This book was written in 2010 so I feel like a revision could occur to navigate even more technical communication responsibilities in businesses today. For example, System Engineers have a huge role in technical communication between all components of a technical product. I feel like this specific role could be very helpful in identifying where some of the technical communication responsibilities have been dispersed in today’s world. Spilka does mention that the content would probably be irrelevant for the types of companies that I work at. Additionally, every company is vastly different in how they incorporate technical platforms and integrate with engineering processes. I can only imagine the challenges Spilka encountered in trying to compile the history of technical communication.
This is my first course for my certificate requirements. I wasn’t totally sure I would “fit” into the MSTPC program, since my background is literature and I have limited experience with technical writing and media. I saw it as a challenge to the boundaries of my knowledge. However, as a reader of some of the class material, I felt I was not part of the target audience, since I am not familiar with technical-writing jargon. Of course, if a reader cannot relate to the material, it is a struggle to maintain interest and focus. Nonetheless, I kept on reading. As I read Blythe, Lauer, and Curran’s “Professional and Technical Communication in a Web 2.0 World,” I began to relate, to focus, and to reflect.
I teach mainly composition at a technical college, yet we still devise our composition classes as if they were for a four-year college. I have had some of my students complain about having to take one writing class since they felt it didn’t pertain to their program. Of course, in the end they understand that any writing genre (mainly essays) will help them communicate more effectively in their careers. However, the set curriculum may not be sufficient if many of my technological-minded students are going into careers where more technical writing would be the norm.
A student who graduates from a technical school is more apt to be required to write the forms of communication mentioned in Blythe, Lauer, and Curran’s report. Figure 1 (Blythe, Lauer, and Curran, 2014, p. 273) lists research papers at the bottom of the “most valued” column, whereas emails, instruction manuals, websites, presentations, and blogs are at the top of both the “most often used” and “most valued” lists. So perhaps I can begin making changes in my courses to meet the future needs of my students.
I am not discounting the value of essay writing or the objectives of our mandatory writing courses, for essay writing builds the skills needed for many of the more technical forms of writing. However, exposing students to other genres might attract the interest of a more tech-savvy (or interested) audience and may lead students to feel they are getting more out of their course that they can apply directly to their programs and future careers.
Perhaps being a student again (not originally by choice) has reminded me of how my students feel when entering my required classes. Plus, this class is broadening my understanding of writing and the value of different forms of communicating in today’s technical world. Hopefully, my students will feel the same.
Blog, blog, blog. . .
I have never blogged, nor found interest in blogs. Perhaps this was largely due to time constraints, but I am also sure it was due to my personal bias toward blogging, for it seemed to me that many used it to vent. I thought of blogs as more of an online personal journal.
The Writing Process
Many of my students blog, so I decided to use the following video about writing a blog as a way to connect with my audience, and show them that writers don’t just write– they follow a process.
Audience, Tone & Context
In addition to sharing the above video about writing a blog, we also discuss audience, tone, and context. Since the professor in the video is Canadian, that alone opens a discussion of audience, tone, and context. We then evaluate the professor’s choices in devising this video.
After doing activities like this with my students, I realized I needed to change my attitude about blogging. My goal as a writing instructor is to get students to write– even if they are writing blogs. Most likely they will enjoy the process more since it isn’t a traditional “essay.”
Has the democratization of the Internet turned us all into Kafka-esque cockroaches? Andrew Keen argues yes in his debate with David Weinberger. From Keen’s perspective, the Internet has stripped away traditional filters and given a voice to the masses — and the resulting clamor shows the worst of humanity. Instead of having gatekeepers in the form of publishers and traditional media sources to groom experts and present us with the best, the unaware Internet user is bombarded by amateurs and their trash.
Image from Books by Audra. http://www.booksbyaudra.com/2016/04/18/considering-kafka/
Weinberger takes the opposing viewpoint that the traditional media filters were flawed, and the Internet offers opportunity for everyday experts and untapped talent. He’s not alone in his assessment. Philip Tetlock created the Good Judgment Project on the premise of nonprofessionals making more accurate predictions than established experts. Tournament style, the project identifies the top two percent of “superforecasters” who don’t have any particular credentials but are amateurs with a knack for making predictions. Through Web 2.0, these individuals are now able to connect and share ideas in a way that was inconceivable just twenty years ago.
Interestingly, most of the articles that I saw about everyone being an expert through the leveling of the Internet were from about five to ten years ago. After that, it stopped being news. Now, it seems that the voice given to the masses is assumed and taken for granted. The last decade has softened it from a potential catastrophe to now just an accepted part of culture.
The twist is that the Internet both still relies on traditional gatekeepers and is developing new types of filters. As we’ve discussed earlier in this course, the more content is created, the more significant it becomes to navigate and find the right content. Jonathan Zittrain discusses how Google and other search engines have become a de facto filter as people attempt to find material online. Zittrain talks about the tension between “neutral” search algorithms and Google’s moral responsibility to present quality, or at least accurate, sources. His talk acknowledges that most people have a knee-jerk reaction against search engines serving as a “Big Brother” controlling what you see, but also don’t like the specific examples of overtly wrong or biased sites at the top of search results. Even though anyone can contribute online, search engines and other tools for navigating the web still provide some basic form of filtering. The question is how much power we should give them.
Even in light of the massive amount of user-generated content and the new ways of determining what has value, there is still a role for traditional gatekeepers to keep audiences from being bombarded. This is good news for Keen, who sees “professional intermediaries [as] arbiters of good taste and judgement.” For me, the example that comes to mind is Wikileaks. On one hand, it embodies the ultimate democratization of information, with everything released to the public online. On the other hand, nobody reads the thousands and thousands of released leaks, and the general public hears about only the top few items of interest as reported by major media outlets. The gatekeepers still serve to prioritize the information and tell people what they care about.
Wikileaks releases unprecedented amounts of information online, but still relies on traditional filters to make sense of it. The Guardian: https://www.theguardian.com/news/datablog/2010/nov/29/wikileaks-cables-data
The New York Times just ran the article “WikiLeaks Isn’t Whistleblowing,” which offers a scathing condemnation of the Wikileaks approach to “journalism” and argues that massive data dumps are inappropriate and counterproductive because they offer no context for the information and make no effort to discern what is necessary to share. Tufekci writes, “Mass data releases, like the Podesta emails, conflate things that the public has a right to know with things we have no business knowing, with a lot of material in the middle about things we may be curious about and may be of some historical interest, but should not be released in this manner.”
Putting aside the other moral and privacy questions raised by Wikileaks, it serves as an extreme example of how the Internet enables a massive amount of content from all types of sources, while we’re still figuring out the role for filtering and gatekeeping. Keen warns that if we don’t find an answer, we’ll soon see the worst of ourselves reflected back in the Internet and discover our true cockroach nature.
Tufekci, Z. (2016, November 4). WikiLeaks isn’t whistleblowing. The New York Times. Retrieved from http://www.nytimes.com/2016/11/05/opinion/what-were-missing-while-we-obsess-over-john-podestas-email.html
Good Judgment. (n.d.). Retrieved November 5, 2016, from https://www.gjopen.com/
Working together can create more meaning and bring more understanding of the world around us. The ideas in Chapter 4 of Net Smart by Rheingold (2012), especially regarding collective intelligence and the Internet’s capacity to create communities, groups, and audiences that make deeper meaning of what is happening around them, are very powerful and applicable to our work analyzing and reviewing social media principles as well as our work as technical communicators.
I have heard complaints from the generation before mine (professors, staff members, and students who came before) that the way we currently learn and take in information does not take the same amount of effort and time that it used to, and thus we are, as a whole, not as smart as we could be, or as they had to be in the world before the World Wide Web.
I wholeheartedly disagree. Are things different? Definitely. For the most part, we do not have to deal with card catalogs or worry about not obtaining the library book we need because someone already has it out. But what we do have is mountains of information at our fingertips that need to be read through, researched, analyzed, and ultimately accepted or discarded as useful to the projects that need to be completed.
Thinking about it as the natural reaction our society has had to the advent of technology and connectedness, collective intelligence seems like a great place for us to be in.
“Now that we have gained access to digital tools that enable us to share what we know and aggregate small contributions into large knowledge repositories, a new level of collective intelligence is possible” (p. 160).
Just as a reality, it is fascinating how much I find myself depending on the opinions and knowledge of others in my personal and professional life.
I read Yelp reviews and will search through a few pages for tips and tricks about shopping: how to do it effectively, where to go for the best prices, and when to go to avoid the most foot traffic.
I use my coworkers as sounding boards when working on projects, running edits, changes, style issues, and new copy by one or more people to see how they react, even when we’re working on completely different projects.
This trend is so important to the way we think about knowledge and learning. It may seem like an obvious idea. We currently learn from teachers and professors, those who go to school and study techniques specifically to learn how to instruct and impart knowledge to others, but to my mind there is still so much stigma associated with the spirit of collective intelligence in schoolwork.
Beginning your career as a student, you do not learn that it is your right, I would say responsibility, to question the font of knowledge: a teacher. In order to retain control over groups of wild children, teachers must be seen as the ultimate authority in their spaces. As you grow older and become more comfortable with yourself and the idea that you have to have your own opinions and thoughts about the world around you, you are inundated with cultural norms and taboos. They are subjects you can’t bring up in public without receiving a negative reaction: sex, politics, and religion. There are other subjects that only apply to you and place you into a subgroup: race, gender, sex, socio-economic status, ethnicity.
By high school you have hopefully learned all the rules, overtly taught to you and covertly gathered by osmosis, and have gone through puberty, so you have become a version of yourself that can function in society. You have created PowerPoints and book reports and scientific models. But beyond being forced into groups by your teachers, it is still up to the teacher as the superior figure to create meaning and focus your attention on the facts and figures that you need to know.
That long analogy is meant to draw attention to the fact that with the Internet and social media, it is up to us to create meaning and monitor the information and knowledge being influenced and cultivated around us. I cannot say with complete certainty that children are reacting differently in classes. There are thousands of studies and reports about classroom teaching and management that are authored about the changes going on in classrooms because of technology and the Internet.
What works for me is the idea that we are demanding more of our teaching professionals and of ourselves than we have before. Yes, the Internet gives everyone a platform to shout their opinions from the rooftop (leading to a degradation of fields like traditional print media). It also gives us the ability to share what we know with each other, outside the limits of roundtables and desks with tiny chairs, even outside the bounds of an online course taught by a PhD.
Rheingold, H. (2012). Net smart: How to thrive online. Cambridge, MA: MIT Press.
In Net Smart, Howard Rheingold recognizes the same trend as Sherry Turkle of the historically unprecedented amount of available information through the Internet. However, Rheingold confronts the challenge of the volume and velocity of digital media with much more optimism. He sees it as a huge opportunity, if people understand the right strategies for managing it.
In his TEDx talk “Attention: The New Currency,” Sree Sreenivasan argues that getting and keeping attention is critical for success in this world of overwhelming volume. Sreenivasan says, “It isn’t just that our attention spans are getting smaller and shorter but that there’s so much more stuff coming at us and so much more stuff competing for our attention.”
Rheingold makes the case that one way to handle the volume is increased mindfulness about what is getting our attention. He argues that the issue isn’t that multitasking is rewiring our brains, but rather that we do it without even being aware of it. The Washington Post article “Is the Internet Giving Us All ADHD?” suggests that although rates of ADHD are steadily increasing and the Internet facilitates behavior often recognized as ADHD, there is no evidence of a causal link. As the volume of information on the Internet continues to explode, we don’t need to fear brain damage so much as we need to be mindful about where we put our attention. Sreenivasan quotes Les Hinton, former publisher of the Wall Street Journal, as saying, “The scarcest resource of the 21st century is human attention.”
However, simply knowing where our attention is going is only the first step in managing information overload. In Chapter 2, Rheingold suggests a dashboard approach to “infotention.” Savvy users organize and manage content in a dashboard style so that they can easily access the most relevant and useful information. When you’ve decided how you want to prioritize your attention, the dashboard approach helps you organize the information that you’ve decided is worth your time.
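To make the dashboard idea concrete, here is a minimal sketch of how prioritized streams might work; the stream names and priority numbers are my own invention, not Rheingold’s. Incoming items are sorted by a priority you assign to each stream, so the content you have decided matters most surfaces first.

```python
# A toy "infotention dashboard": each information stream gets a
# user-assigned priority (lower number = more important), and incoming
# items are sorted so high-priority streams surface first.
# Stream names and priorities here are illustrative only.

STREAM_PRIORITY = {"work-email": 0, "industry-news": 1, "social": 2}

def build_dashboard(items):
    """Sort (stream, headline) pairs by the priority of their stream.
    Unknown streams sink to the bottom."""
    return sorted(items, key=lambda item: STREAM_PRIORITY.get(item[0], 99))

items = [
    ("social", "Friend shared a photo"),
    ("work-email", "Deadline moved to Friday"),
    ("industry-news", "New CMS release announced"),
]
dashboard = build_dashboard(items)
```

The point isn’t the code itself but the design choice it embodies: you decide the priorities up front, once, instead of letting whichever stream is loudest claim your attention in the moment.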
A third strategy is relying on others as curators. Rheingold tells several cautionary tales about bogus websites and warns about the need for “crap detection.” However, playing detective and investigating the source of every website you visit just makes the volume even more overwhelming. In my experience, leisure users rarely take the trouble to research a site’s author and dig for source material. Instead, most users have the online news site that they always read, and they trust it; no further investigation necessary. I haven’t been able to find a comprehensive study, but I’m curious what percentage of their time online people spend on just a handful of favorite sites. I’m guessing that for most people, the majority of their time online goes to a couple of sites that they have deemed to pass the crap detection test.
Beyond curating your own list of favorite sites, people turn to social curation. Just as Google uses the PageRank algorithm (Rheingold, pg. 83) to boost search results based on links from other sources, so we turn to the wisdom of the crowd to help us determine which information in the sea of possibilities should get our attention. I saw the article “Social Curation in Audience Communities” about how a Finnish newspaper deemed the participation of its readers in “liking” and sharing articles one of the most critical factors in its success, and the strategies it used to begin leveraging this social curation. The article includes the statistic that up to 75% of the online news consumed by American audiences is forwarded through email or social networking sites. You could argue that this is because of peer pressure, the desire to read what our friends are reading, or other social motivators, but I think it’s also a coping mechanism for handling the volume of information available. When there are too many options, one way to decide is to take the recommendation of others. I think it’s the same as asking your dinner date what they’re ordering when you’re at a new restaurant and trying to pick from a huge menu.
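The core idea behind PageRank can be sketched in a few lines: a page earns rank from the rank of the pages that link to it, so links act as votes weighted by the voter’s own standing. The tiny link graph and damping factor below are invented for illustration; this is the textbook power-iteration idea, not Google’s production system.

```python
# Minimal power-iteration sketch of the PageRank idea: each page's rank
# is redistributed along its outgoing links, plus a small uniform
# "teleport" share controlled by the damping factor.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # Everyone starts with the uniform teleport share.
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if not outgoing:
                continue
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share
        rank = new_rank
    return rank

# Toy graph: both "a" and "b" link to "c", so "c" ends up ranked highest.
links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(links)
```

Running this, “c” outranks “b” because it collects links from two pages rather than one, which is exactly the crowd-as-curator effect described above.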
Finally, Rheingold pushes us to go one step further: “Google itself is not the curator; we are. Every time a person references a link, they help to curate the Web” (pg. 127). After we’ve waded through the huge amount of information and deemed what is reliable and attention-worthy, we can participate by becoming the curators. Thesis 72 in The Cluetrain Manifesto gets at this: “We like this new marketplace much better. In fact, we are creating it.” As a community of curators, we’re no longer just consumers of corporate rhetoric; we are empowered to determine value for ourselves.
[Image: Three sails to staying afloat in information overload. Drawing from Coloring Son]
Actually, Rheingold’s principles for being a “filter blogger” bear a surprising resemblance to what we do as technical writers. We take in a huge amount of information and distill what is important. Although technical writing then moves to the next step of content creation, it begins with managing and curating available information. We practice the skills of culling information daily, and so we can appreciate the wealth of opportunities offered by the Internet without being swept away.
Dewey, C. (2015, March 25). Is the Internet giving us all ADHD?. Washington Post. Retrieved from https://www.washingtonpost.com/news/the-intersect/wp/2015/03/25/is-the-internet-giving-us-all-adhd/
Sreenivasan, S. (2015, April 20). Attention: The new currency [Video]. TEDx Broadway. Retrieved from https://www.youtube.com/watch?v=8I4WkhG_GRM
Villi, M. (2012). Social curation in audience communities: UDC (user-distributed content) in the networked media ecosystem. Participations: Journal of Audience and Reception Studies, 9(2). Retrieved from http://www.participations.org/Volume%209/Issue%202/33%20Villi.pdf
Dave Clark’s (2010) article “Shaped and Shaping Tools: The Rhetorical Nature of Technical Communication Technologies” is reminiscent of my Rhetorical Theory class, as he examines the then-new microblogging site Twitter and the rhetoric of technology. This is especially interesting to me because I was working at an online media/SEO company when Twitter exploded online. Are there similar studies on the rhetoric of technology for other social media sites such as Facebook, LinkedIn, or Pinterest? And how have these sites influenced digital rhetoric, genre theory, and activity theory for technical communication? What, Clark (2010) asks, is the importance of learning about new technology?
Learning and assessing new technology
How do we learn about new technology? This was one of the first questions asked in English 745 and we were asked to identify ourselves as early adopters, medium adopters or late adopters. Where did you put yourself in this range? Clark (2010) asks the reader “what it might mean to be a literate user of Twitter (or any other type technology)” (p. 86). What do professionals expect technical communicators to know about technology? How can we transfer and apply this knowledge in the appropriate environment?
To understand technology, Clark (2010) says we must also understand the rhetoric and analyze the research. Clark (2010) categorizes his approach to explain the “rhetoric of technology into four groups: rhetorical analysis, technology transfer, genre theory, and activity theory” (p. 92). I’ll examine the first two groups below.
Rhetorical analysis of technology is relatively new and should not be compared to rhetoric of science since it has its own foundations. However, it’s a good place to start. Clark (2010) cites Robert Johnson’s premise that
“as a field we must argue for a rhetorical approach to technological design and implementation that places users, rather than systems, at the center of our focus, and that we have ethical and cultural responsibility to learn and argue for collaborative approaches to technology design” (p. 93).
There’s more to it than using technology like Twitter (or Facebook, etc.); we must also analyze the design and the ethical responsibilities of its use. (Johnson’s book, User-Centered Technology: A Rhetorical Theory for Computers and Other Mundane Artifacts (1998), can be a difficult read, but it is insightful about how technology is not always user-centered.) This is difficult to digest at first: understanding technology design in terms of rhetoric and ethical practice for the user. However, if we understand that technology is constantly changing and improving, then we can become more cognizant of new technology design and its effects on the user.
The second category Clark (2010) discusses is “technology transfer,” the movement of an idea from the engineer’s desk into public use. Of particular importance to technical communicators, Clark (2010) states, is that they are “constantly expected to design, evaluate, document, and implement new technologies” (p. 94).
This is the answer to Clark’s (2010) primary question. Before we can design and implement new technology, we must be able to understand previous technology, document its successes and pitfalls, and evaluate it in order to improve it. However, technology transfer must also be “negotiated, constructed, and reconstructed in the minds of the participants,” according to Doheny-Farina in Clark’s (2010) research (p. 95). I’m still digesting this concept. I remember when Twitter was new and users were experimenting with all the features, tweeting anything that came to mind; no filters at all. Then, in 2010, Twitter announced that it would supply an archive of tweets to the Library of Congress (About.Twitter.com). Yikes! Filters applied. What can technical communicators infer and learn from this rhetoric of technology?
The discussion on genre and activity theories is very interesting and I would like to write about both of them in a separate post. Overall, the rhetoric of technology needs further examination and discussion to understand its implications, our responsibilities, and other theories.
At my company, customers access much of our documentation by searching a central repository. Far and away, the most frequent feedback that we receive about our documentation is “I can’t find what I’m looking for.” So I was very interested in “Informational Design: From Authoring Text to Architecting Virtual Space” (Salvo and Rosinski) and their discussion of the necessity of search and retrieval and of designing our documentation for better navigation.
Salvo and Rosinski talk about envisioning documentation spatially to help users navigate and find their destination. They give the example of knowing user context when searching for “broccoli” in order to return the best results. There is no question that findability is hugely important in how customers locate and use our documentation, and search engine optimization (SEO) has become a big business. It doesn’t matter what we write if the right audience can’t find it at the right time.
Interestingly, I saw this user-context-based search engine patent filed by Google in 2006 (published in 2013). They discuss the known limitations of search engines and their invention to return search results by categorizing the information based on external context clues. The example that they give is figuring out that a given web site is an encyclopedia based on the surrounding words, and then using information about the user to determine whether they are looking for an encyclopedia.
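As a toy illustration of the idea in the patent (not its actual method), a page could be categorized by counting cue words in its surrounding text before deciding whether it matches the searcher’s intent. The categories and cue words below are entirely made up:

```python
# Toy context-clue categorizer: score each category by how often its cue
# words appear in a page's surrounding text, then pick the top scorer.
# Categories and cue words are invented for illustration.

CATEGORY_CUES = {
    "encyclopedia": {"entry", "article", "edition", "reference", "volume"},
    "recipe": {"ingredients", "preheat", "servings", "tablespoon"},
}

def categorize(page_text):
    """Return the category whose cue words appear most often,
    or "unknown" when no cue word appears at all."""
    words = page_text.lower().split()
    scores = {
        category: sum(words.count(cue) for cue in cues)
        for category, cues in CATEGORY_CUES.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

result = categorize("This reference article appears in the 1911 edition")
```

A real context-aware engine would of course combine signals like this with information about the user, but even this crude sketch shows how surrounding words can disambiguate what a page is before it is matched to a query.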
I think having more context-aware searches would be a boon to technical communication and continue to accelerate our path from content creators to content managers, who look beyond the sentence level to strategic documentation processes.
The second piece of findability is not just locating the right document, but then navigating within it. The Wired article “Findability Will Make or Break Your Online Business” talks about both halves in the context of marketing your business, but I think the same is true for helping readers through technical documentation. The tips on providing user-relevant content and appropriate links (as well as the astounding statistic that 30% of visitors use site search) are certainly relevant to how we create and envision documentation.
Salvo and Rosinski make a closely related point about using genre conventions and creating a document environment that orients the audience and primes them for a response. By using signposts and making it clear what kind of document they are reading, we can set expectations so the audience knows what to look for and how to respond.
The diagram below actually comes from an SEO company, but the accompanying article “Are You Marketing to Search Engines or to People?” makes the surprisingly self-effacing claim that the best strategy for getting read online isn’t tricking search engines but creating high-quality content. Documentation that is designed for the audience and understands their needs is more effective in boosting the overall findability of both the website itself and particular information within it.
In “Shaped and Shaping Tools,” Dave Clark also addresses genre theory and how we can create standards and templates that help users know what to expect. Although perhaps not as obvious as a wedding invitation, what are other ways that we can use signposts and ambience tools to define the genre of each document and subconsciously cue the audience on what to look for and where to find it?
Salvo and Rosinski quote Johnson-Eilola as saying “the map has started to replace the story as our fundamental way of knowing.” In light of human history, that seems a shocking thing to say, but I do see it being borne out, at least to some degree, as the amount of information grows exponentially and the challenge of navigating it becomes more important. I still fancy myself a writer more than a cartographer, but managing documentation for findability is an increasingly key part of the role.
“Are You Marketing to Search Engines or to People?” KER Communications. 29 June 2010. Accessed 30 Sept 2016. https://kercommunications.com/seo/marketing-search-engines-people/
Hendron, Michael. “Findability Will Make or Break Your Online Business.” Wired. Accessed 30 Sept 2016. https://www.wired.com/insights/2014/02/findability-will-make-break-online-business/
I was fascinated by the history of technical communication and the progress of technical communicators in Rachel Spilka’s (2010) Digital Literacy for Technical Communication: 21st Century Theory and Practice. Working as a technical writer at a large oil and gas corporation, I identified with several of the changes in the technical communication field, from having knowledge of writing to understanding digital literacy. I was surprised that technical communicators will likely experience “reengineering,” or periods of work and non-work, during their careers. The future of technical communication jobs is uncertain, so technical communicators need to assert their digital skills and prove their value to the company and industry to maintain employment.
I have experienced many changes in roles and responsibilities with technology and writing over the past several years. As JoAnn Hackos explains, “the roles and responsibilities of technical communicators are changing rapidly – in some cases for the worse” (Spilka, 2010, p. ix). As technology evolves and changes, people have to learn, adapt, and apply new technology to advance their expertise. Spilka (2010) states that in Part III of Digital Literacy, technical communicators need to explore the answers to past theories, or develop new ones, to better understand how technology has transformed our work (p. 14). I had not considered that past technologies and methods of communicating have an effect on future ones.
I have been in my current position just over three years, and I have experienced a dramatic change in our standard writing procedure and content management system (CMS). We started with MS Word documents, received handwritten signature approvals, and used file transfer protocol (FTP) to upload them to an archaic CMS. This process of writing and receiving approvals often took months or even years to complete and was not efficient or effective for those who needed to follow the standards every day. Two years ago we underwent a complete overhaul of our process and CMS. Most parts of the process are now auto-generated with email reminders, and the CMS uses HTML and XML files to create standards that are compatible with multiple platforms. No more written signatures or papers filed in folders, and most of the workflow is completed within 60 days or less. Although the system has several drawbacks and often has “bugs” that hinder our process, we’re still better off than before. Management is already researching the next system, since technology becomes outdated as soon as it becomes popular.
We’re in the Web 2.0 era, but will digital literacy, advancing globalization, and technical communication survive the “seismic shift” that will likely lead to Web 3.0 in the near future? R. Stanley Dicks (Spilka, 2010) examines the drastic changes technical communication has experienced over the last couple of decades, and the field doesn’t appear to be moving backward. These dramatic changes will test our skills and value in the workplace. Dicks says that to remain valuable contributors, we’ll have to add “strategic value” that increases company profits, which comprises leadership skills, training, and education, as well as being more than writers and editors. Technical communicators will have to be “symbolic-analytic” workers. (Check out this SlideShare about Johnson-Eilola’s research.) I’m still trying to visualize this concept, but I understand that we’ll have to know and do more than just write words. We’ll have to be the researcher, theorist, rhetorician, translator, and collaborator to prove our valuable skill sets and remain employed.
I have really enjoyed this class and interacting with all of you on this blog. This course has helped me see my current (and future) workplace situation through different lenses, and I feel it has made me stronger professionally. I chose to write my paper on the skills technical communication professionals need to succeed in the modern and future workplace. I have pasted my abstract below; please let me know what you think!
Emerging media have completely changed the face of traditional technical writing. The introduction of Web 2.0 has created user needs that supersede the tangible printed and bound instruction manuals that previously defined the field. As a result, workplaces have established new requirements for the skills ideal technical writing candidates must possess, and universities have strategically designed programs to keep up with these trends. Successful technical writers are now faced with the tasks of interpreting the most effective structure for presenting information, the best terminology for particular users, the appropriate design strategies to maximize accessibility, and the optimal platforms and technology for delivering products. This paper will define modern technical communication and highlight the essential skills and abilities required for success in the industry. It concludes with my personal experience of these dynamics as a technical communications professional in multiple workplace settings.
The skills I then listed are to:
- Understand business operations and corporate financial goals to prove their value to the workplace
- Possess collaboration skills and the ability to work in a team environment
- Maintain a thorough familiarity with leading industry tools and trends
- Possess solid writing, composition, and oral communication skills
- Possess the ability to evaluate their own work performance as well as that of others
- Possess document design knowledge
- Possess the ability to execute tasks and projects with enthusiasm and to meet deadlines with little support from management