Author Archives: peahleah
Content strategy is a buzzword that people have been using for the past few years, but what does it mean and why should organizations care? We can all agree that Web 2.0 technology and applications have changed how people use content. We can also agree that if content is not useful and easy to find, customers and users will move on. My paper considers how a technical communicator can transform content into a business asset by responding to the following questions:
- What is a content strategy? What is it not?
- How do you develop a content strategy?
- What is a content audit?
- How do you implement a content strategy?
Once the above questions are answered, my paper concludes with my own case study in understanding what is involved in a content strategy and some of the challenges faced when I converted my company’s FrameMaker files into DITA.
Prior to this class, I had never blogged. I kind of like it. I also learned that I hold my breath when I check my work email. 🙂
Audience analysis is something that I’ve always struggled with in my career. As a technical communicator who has spent more than seven years documenting various software products, I often wonder why it is so difficult to understand the users of a particular product or why it is impossible to have contact with them. Since documentation is so important, why does all customer contact and audience analysis come from product management, marketing, or support? If we are providing information to customers, shouldn’t we as technical communicators be the first line of contact? I understand that the main reason is to respect customers’ privacy and time, but that just seems like an excuse.
Similar to cases three and four in Addressing Audiences in a Digital Age, my company also provides enterprise network security services and products. We produce 500+ page PDFs and HTML help. We want to improve our documentation, but we don’t truly know our readers’ needs. Like most linear-based PDFs, our content is not chunked and some of the important tasks are buried in paragraphs. We are also interested in providing tutorials, but since we have absolutely no contact with our customers, we don’t know if creating these tutorials would be valuable.
Blakeslee explains that there are three things writers need to know about audiences:
- How readers will read and interact with the information
- In what context readers will use the information
- What expectations readers have before using the information
The chapter then gives detailed examples in the case studies of the strategies and methods writers use to analyze their audience. Some use bulletin boards, personas, and support call logs. Others use industry conference proceedings, whitepapers, or training materials. At my company, we get some feature request information from product management. We also receive software bugs that are logged if customers or employees find issues in our documentation. While our current methods aren’t the best, I feel encouraged to apply some of the questions listed in Appendix A to improve our documentation and to provide the best user experience possible.
While reading chapter 2, “Crap Detection 101: How to Find What You Need to Know, and How to Decide If It’s True,” of Net Smart, I was waiting with bated breath for Rheingold to bring up the controversial subject that has caused great debate, disagreements, and “unfriending” in my social media circle in recent years: vaccines and autism in children. But, he didn’t.
As a parent, do I have concerns that autism might be linked to the vaccines my children receive? Absolutely. Do I vaccinate my children? Absolutely. Do I worry that I might be making the wrong choice after each vaccine? Absolutely. (To date, my sons–fifteen and eight–do not have autism).
So, what are we as parents to do? Rheingold recommends that we “chase the story rather than just accepting the first evidence you encounter.” To chase the story, the first thing to do is to search for information online. But what words do I search for and which link(s) do I click? Rheingold also states that “when you get the results from a Web search engine and click on a link, you can’t be sure that what you get is accurate or inaccurate information, misinformation, or totally bogus.”
I Googled “vaccines and autism” and then clicked the “Images” link. From here, the search results were already conveniently categorized for me by “chart”, “don’t cause”, and “for children”. The results also showed screaming babies and needles—scary stuff for any parent. Mixed in with these images were other cartoons and infographics that were pro-vaccine; one even had support from Bill Gates.
How can I tell if any of it is real? Which side of this controversial debate do I take? Rheingold suggests that we “think skeptically, look for an author, and then see what others say about the author.”
But how is this possible when even doctors, nurses, and government agencies, all of whom have credentials and are highly regarded as experts, can’t agree?
Rheingold also states that “digital media and information abundance may complicate people’s confidence in and knowledge of who is in authority” and that the “social aspects of critical evaluation can be powerfully useful, but they also can be misleading.”
Just because a link displays at the top of a search engine, it doesn’t necessarily mean that it is the best source of information. Nor does seeing disturbing photos of needles sticking into babies convince me that vaccines are harmful.
To complicate things even further, Rheingold states that when searching online, we “write the answer you want to get when formulating your search query.” So if I enter “vaccines cause autism”, I will probably get rhetoric on how vaccines are bad; and if I enter “vaccines do not cause autism”, I will get information on how the two are not related. This is also referred to as the “echo chamber effect.” We are all guilty of focusing our attention only on things that align with or reinforce our own beliefs or behaviors. Is this why AutismOne has 14,000 Twitter followers?
Or why there are now children’s books that urge children to get vaccinated against measles? Would a parent who refuses to give their child vaccines allow that child to read a bedtime story on the importance of being vaccinated? Probably not.
With this abundance (overload) of information, this is where my “well-tuned internal crap detector” comes in handy. However, Rheingold cautions that for “people who bet their health on online medical information […] the stakes in this detective game are high.” To get my answer on vaccines and autism, I could triangulate–check an author’s name, enter the URL of a site into a productivity index or hoax site, and type “criticism” or “background” in a search–to find at least three things that indicate whether an online link is credible.
Yet, this is not enough as Rheingold claims “well-intentioned yet dangerously misinformed people, quacks who sincerely believe that their ineffective cures will save the world […] abound online. It’s not just that uninformed consumers of bad medical information can harm themselves; people who link and forward without checking closely are part of the problem. When it comes to medical information […] believing or forwarding bad info can be unhealthy or fatal.”
If you believe some of the stories online, there are large portions of elementary schools in California with unvaccinated children. Other stories cite celebrity Jenny McCarthy as a dangerous anti-vaccine advocate. There are blogs written by people who grew up without vaccines but are now reformed, and there are so many anti-vaccine social media pages and groups that it becomes difficult to figure out which information is useful or accurate. Did you know that World Anti-Vaccination Day is November 11? Neither did I.
I’m not sure when the controversial debate that autism might be linked to the vaccines children receive will be settled. Will it take a scientific breakthrough? Will it be when previously eradicated diseases reemerge? At this time, it seems that the only thing to do is to keep asking questions and to think like a detective to try to determine the credibility of online information so that you can make the best choice for your family. James Madison put it best: “knowledge will forever govern ignorance: And a people who mean to be their own Governors, must arm themselves with the power which knowledge gives.”
William Hart-Davidson defines a content management system (CMS) as a “set of practices for handling information, including how it is created, stored, retrieved, formatted, and styled for delivery” (pg. 130). Basically, a CMS sits on top of your content and assists with the following functions:
- Topic management: searchable, reusable content
- Single-source publishing
- Translation/localization workflow
- Collaborative development and version control
- Central output format management
Furthermore, Hart-Davidson claims that a best practice of content management includes the “need to separate content from presentation” (pg. 130).
But just how difficult is it to separate information from presentation and design?
In my experience, it is very difficult. While it is relatively easy to use the same chunks of content (e.g., single XML files) in multiple output formats, it is not easy to customize the design, format, and style of an information product. Let me explain.
We are currently implementing SDL LiveContent as our CMS. It is very expensive, and due to budget restrictions, my manager went with the basic, out-of-the-box implementation. In addition, we are required to provide two types of output—PDF and HTML—for every major software release. To create PDF output, we must develop stylesheets that transform our XML into XSL-FO. XSL-FO defines the presentation of XML content through formatting objects and properties that specify the page format, page size, font size, and paragraph/table/heading/list styles. However, since we went with the basic SDL LiveContent implementation, the difficult, time-consuming task of developing the XML-to-XSL-FO stylesheets falls to us. (SDL LiveContent offers services to create the stylesheets, but they are very expensive.)
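To give a sense of what developing those stylesheets involves, here is a minimal sketch of an XSL transform that maps a topic title element to a branded XSL-FO heading block. The element name, font, and color values are hypothetical stand-ins for our company formatting, not SDL’s actual transformation pipeline.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Sketch: transform a title element into a formatted XSL-FO
     heading. Font, spacing, and color values are invented
     placeholders for company branding. -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:fo="http://www.w3.org/1999/XSL/Format">

  <xsl:template match="title">
    <fo:block font-family="Helvetica"
              font-size="18pt"
              font-weight="bold"
              space-before="12pt"
              space-after="6pt"
              color="#005A9C">
      <xsl:apply-templates/>
    </fo:block>
  </xsl:template>

</xsl:stylesheet>
```

Every paragraph, table, heading, and list style needs a template like this one, which is why building a full stylesheet set from scratch is such a time-consuming task.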
If we don’t develop stylesheets, we will have little control over the presentation (also referred to as “signposting” in chapter 2) of our content. This is unacceptable to my manager, as she expects all of our content to continue to have our professional, company-branded formatting.
If this weren’t complicated enough, SDL LiveContent recommends a different professional formatting solution from the one we currently use (and whose stylesheet we have already spent a lot of time customizing). We all agree that we do not need two or three publishing tools to generate a PDF or HTML. We also don’t want a complicated, manual workflow that takes the content from our CMS, generates output (PDF and/or HTML), and then stores it back in the CMS. No one on our team can write scripts to do that, and there isn’t a bridge to connect the CMS with our current publishing tool.
Ideally, we want to have our content stored in one repository, and from there, we want to be able to generate output on an ad hoc, as needed basis. We want to click a button—have all the magic happen—and then view the PDF that has a beautiful, professional layout. How we get there is my responsibility over the next few months, but I’m convinced that we will have to ditch our current publishing tool and will have to develop brand new stylesheets.
Digital Literacy for Technical Communication was written specifically for me! Many items described in the first two chapters—the recent introduction of the Darwin Information Typing Architecture (DITA), structured authoring and reuse, implementation of a content management system (CMS), transition of job and team titles, and participation in agile development methodology—affect me directly.
Job title and team name transitions
Digital technology has personally changed my job, job titles, and team name in less than two years at Hewlett-Packard. In July 2013, I started as a contract technical writer on the Technical Publications (Tech Pubs) team.
Four months later, I was converted to a full-time employee and my job title changed to information developer. Around this same time, my manager decided that our team would be called Information Development (Info Dev).
Last May, our division was restructured and our team name changed for a third time; we are now called Content Development and Delivery (Content). Moreover, since I managed the FrameMaker conversion to DITA project, I plan to renegotiate my job title at my annual performance review next month to information architect.
We also work on small teams (based on our product offerings) that incorporate the agile development methodology.
FrameMaker conversion to DITA
This past year, I championed a project—including tracking and documenting the entire process—that converted our FrameMaker product library into DITA.
What is DITA?
In Saul Carliner’s chapter “Computers and Technical Communication in the 21st Century”, he describes DITA as an XML-based architecture that divides content into small, self-contained chunks of information that can be reused in several different communication products (pg. 42).
The highest structure in DITA is a topic: a single XML file. DITA has three main topic types: concept, task, and reference. In her book, Introduction to DITA Second Edition: A Basic User Guide to the Darwin Information Typing Architecture, Including DITA 1.2, JoAnn Hackos defines the three topic types with questions:
- Concept: What is this about?
- Task: How do I?
- Reference: What else? This information may also include APIs, error messages, or command line reference lists.
All of the DITA topics can then be assembled, prioritized, and collected into a DITA map—basically a Table of Contents.
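To make this concrete, here is a minimal, hypothetical example of a DITA task topic and a map that assembles it with sibling topics. The ids, titles, and file names are invented for illustration; they are not from our actual library.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE task PUBLIC "-//OASIS//DTD DITA Task//EN" "task.dtd">
<!-- A task topic: answers "How do I?" -->
<task id="configuring_alerts">
  <title>Configuring alerts</title>
  <taskbody>
    <prereq>You must have administrator privileges.</prereq>
    <steps>
      <step><cmd>Log in to the management console.</cmd></step>
      <step><cmd>Select <uicontrol>Alerts</uicontrol> and click
        <uicontrol>New</uicontrol>.</cmd></step>
    </steps>
  </taskbody>
</task>
```

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE map PUBLIC "-//OASIS//DTD DITA Map//EN" "map.dtd">
<!-- The map collects topics into a table of contents -->
<map>
  <title>Administration Guide</title>
  <topicref href="about_alerts.dita"/>        <!-- concept -->
  <topicref href="configuring_alerts.dita"/>  <!-- task -->
  <topicref href="alert_reference.dita"/>     <!-- reference -->
</map>
```

Because each topic lives in its own XML file, the same task can be referenced from multiple maps and reused across several information products.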
Our FrameMaker conversion to DITA process included the following high-level steps:
- Evaluate and select an XML editor. We looked at MadCap Flare, AuthorIT, XMetaL, and oXygen. After much debate, we selected XMetaL.
- Conduct a content inventory to identify and prioritize which FrameMaker books to convert. In addition to documenting software, we also document hardware, and decided to keep these guides in FrameMaker—it’s static content that does not change very often. We also decided to keep our legacy software releases in FrameMaker and only converted the latest version.
- Clean up the source FrameMaker files as much as possible before the conversion to ensure that just the right amount of information was included within a given Heading. Not all of our existing content was consistently structured to contain one concept, one procedure, or one set of reference information. We determined that the PDF generated from FrameMaker would be our source of record to verify that all content was correctly converted.
- Create and run a Mif2Go script to convert every FrameMaker Heading into its own DITA topic. The script also attempted to accurately transfer every paragraph and character tag in FrameMaker into the respective DITA <element> tag. Our library of approximately 1,000 pages (in PDF) converted into more than 4,000 DITA files (topics).
- Using the PDF generated from the FrameMaker source file, open the DITA map (and then each DITA topic) to verify that all content was properly formatted. This step took a significant amount of time, as all 4,000 files needed additional cleanup and validation.
- Use WebWorks to generate output for a DITA map. We created custom stationery files (specialized CSS) that transfer every DITA <element> into a specific look and feel (i.e., paragraph and character style). We have two types of output: PDF and HTML.
- Implement a content management system (CMS) to store all of our DITA files. We selected SDL, and our team training on how to use it starts tomorrow!