I am usually quite careful when it comes to my phone: I use a phone case, apply a screen protector, things like that. But I suppose accidents happen regardless. So, during the first week of August, I accidentally dropped a big screwdriver on the phone (don’t ask why) and heard a “crack” sound. Uugghh… my heart dropped when I saw the crack. Really bad.
Hoping the screen protector was strong enough to protect the touchscreen (after all, I used a tempered glass screen protector), I turned it on and, bummer, the touchscreen was completely borked. Fortunately, the internal storage was not affected, so the software worked fine. However, I could not interact with the apps, not even to shut the phone down. So, the only thing I could do was let the phone run until the battery drained and it shut down on its own.
I checked the company’s website and their user forum, and found out one could send the phone back to the company in China and pay $150 (apparently this kind of physical damage isn’t covered by the warranty), or spend about $50 for the replacement screen/touch display and do the repair oneself. Being the tinkerer that I am, always wanting to see the guts of any electronic device, I decided to risk it and do the screen replacement myself. The downside: opening up the phone means voiding the warranty. But, at this point, the warranty means little to me if I have to spend big bucks anyway to have the phone fixed. Besides, I am going to learn something new here. Worst case scenario: I fail. But then I can always sell the phone for parts on eBay. So, nothing really to lose here. Besides, I still have my Moto X as a backup phone.
YouTube provides various instructions on DIY phone screen replacement. I found two videos that really helped me to understand the ins and outs of replacing the screen.
The first video below nicely showed how to remove the damaged screen and put the replacement in. He showed which areas we need to pay attention to so we won’t damage the components.
The second video was created by a professional technician, so his method is very structured. The tools he used helped me figure out the tools I would need.
I watched those two videos probably a dozen times to make sure I didn’t miss anything (and, yes, I donated to their PayPal accounts as my thanks).
It took me a while to finally finish the screen replacement. I removed the cracked screen first, and then had to wait about three weeks to receive the replacement screen. I just used whatever online store they recommended to get the parts I needed.
Below is a set of thumbnails with captions explaining my work. Each thumbnail is clickable to its original image.
It was scary the first time I worked on the phone, mostly because I didn’t want to break things. But I eventually felt comfortable dealing with the components and, should a similar thing happen again (knock on wood it won’t), I at least know what to do now.
Some time ago I read about going out and learning about your own surroundings. Sorry, I’m completely blank on the actual source and whether I read it in one of those motivational emails or tweets or websites or image memes. The point is, we should not stay inside our own bubble.
How much do we actually know about the awesome services or initiatives available in our own library or within other units on campus? I only know a little, to be honest. Many times I found out about a cool collection in the library because somebody mentioned it, a local newspaper wrote about it, or it showed up in the newsletter sent to library supporters. Kinda embarrassing, but, hey, better late than never. Same thing with many initiatives happening around campus. With so many units on campus, I am sure I miss out on many of them. But I would like to highlight several of them:
First, MSU Libraries is gathering text and data aimed at digital humanities (DH) projects, either through our own digital collections or through collaboration with vendors. It all started with a request from a research faculty member wanting to work on a topic that required Congressional data. This collaboration with the faculty member prompted our Digital Humanities librarians to pursue other text or data collections that we could offer to our users (and, in some cases, to the public).
Another one I’d like to highlight is Enviro-weather, a set of weather-based tools for Michigan agriculture’s pest, natural resources, and production management decisions. This is a collaborative project between the Michigan Climatological Resources Program and the MSU Integrated Pest Management Program. Each yellow dot on the map represents an agriculture station. If you hover over a dot with your cursor, you’ll see the latest data pulled from the weather stations positioned around the state. Click on a dot and you’ll see more complete information on the area. You could, of course, go further and get the raw data itself by going to their Enviro-Weather Automated Weather Station Network site.
The Geographic Information System (GIS) unit on campus has created cool and useful GIS-based applications to showcase the MSU campus. My favorites are the two below:
The Historical Imagery application provides aerial photography of the MSU campus from 1938 to 2010 (I hope they’ll add more for later years). While interacting with the application, I, of course, couldn’t resist checking the area where the current MSU Libraries is located. By moving the slider slowly, I could see the changes from an empty lot to its current structure. Not all images are available; sometimes you get an empty section due to image unavailability. Still, it’s really cool to see the changes that happened during the last 60 years or so.
The Environmental Stewardship application (requires Adobe Flash Player 11 or higher, unfortunately) allows one to check the energy consumption and/or waste reduction efforts around campus. You can pick a building and generate a report based on data for the current or a past fiscal year. Befitting MSU’s status as a public, land-grant university, the information is available for the public to see and download; the application allows anyone to inspect and interact with the data themselves.
There are more great projects and initiatives around campus like the ones I highlighted above. It would be nice if I could do a “cool stuff on campus” search on the university website instead of relying on serendipity. But, hey, I probably should go around and ask instead. :-)
In my earlier post “Digital Collections and Accessibility”, I touched upon the considerations we would need to address when building or creating digital collections (or other things that rely heavily on utilizing images such as data visualization) for public use. Here are the questions I put down in that post:
“Given the ubiquitous nature of digital collections, the goal that these collections would be used as part of scholarly activities, and the library’s mission to disseminate the information as widely as possible, there is one aspect that many of us need to address when we plan for a digitization project: how do people with disabilities access these collections without getting lost? Can they also get the same access and benefit from our collections if they only rely on their screen readers (or refreshable Braille, or any other assistive technology)? Can people move around our website easily using just a keyboard (for those with hand-coordination difficulty who cannot use a mouse)?”
So: planning. Planning is an important part of incorporating accessibility into building a collection. Typically, building a digital collection starts with designing the metadata (PDF) and then proceeds to further development activities such as database design, content creation, data entry, and coding/front-end development. Whichever process we follow, we would like to see that the website is well designed and the information presented is useful for our audience. (I am assuming that most digital collections are created and made available for web access, with an added bonus if they also employ a responsive design.)
If you visit digital collections developed by various institutions, you’ll see that they present their collections differently. Many display the collection like a catalog that shows, at a minimum, an image, the physical description, and related information such as the owner, creator, and copyright statement. Some also include an interpretation of the object (think of the label of an object or painting displayed in a museum).
Regardless of how the object is presented (by description or interpretation), the accessibility considerations are the same. The most common ones: the web page needs to be properly structured using proper headings; the flow of information presented on the page needs to make sense for screen reader users or keyboard-only users; search forms need to be properly labeled; and images need to have alternative text (usually referred to as “alt text”). This is where planning for the page design and coding becomes important.
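Some of these checks can even be automated early in a project. Here is a minimal sketch, using only the Python standard library, that flags `img` tags shipped without any alt attribute; the markup and file names are made up for illustration:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collect the src of every <img> that has no alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            # Decorative images may legitimately carry alt="", so only
            # flag images where the alt attribute is absent entirely.
            if "alt" not in attrs:
                self.missing_alt.append(attrs.get("src", "(no src)"))

# Hypothetical collection page used as test input
page = """
<h1>Sample Collection Item</h1>
<img src="map-1938.jpg" alt="Aerial photo of campus, 1938">
<img src="map-2010.jpg">
"""

checker = AltTextChecker()
checker.feed(page)
print(checker.missing_alt)  # → ['map-2010.jpg']
```

A script like this cannot judge whether the alt text is *meaningful*, of course; it only catches images that were never described at all, which is still a useful first pass over a large collection.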
Consider this page:
and consider how the flow of information would be read by a screen reader and how a screen reader user might hear it:
Typical screen readers read the information as if the CSS were disabled; they read web content in the order it appears in the code.
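That “source order” behavior can be simulated with a few lines of Python (standard library only; the sample markup is invented for illustration). Stripping the markup and emitting the text chunks in the order they occur is roughly what a screen reader linearizes:

```python
from html.parser import HTMLParser

class SourceOrderReader(HTMLParser):
    """Emit text in the order it appears in the markup, which is
    roughly the order a screen reader will read the page."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.chunks.append(text)

# CSS could float the sidebar to the right so sighted users notice
# the item description first, but a screen reader still starts with
# whatever comes first in the source.
page = """
<div class="sidebar">Browse | Search | About</div>
<div class="item">Portrait of a student, ca. 1900</div>
"""

reader = SourceOrderReader()
reader.feed(page)
print(" / ".join(reader.chunks))
# → Browse | Search | About / Portrait of a student, ca. 1900
```

If the actual object description should be heard first, it needs to come first in the markup (or the page needs skip links), no matter what the stylesheet does visually.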
(Bonus: if you have not seen or heard how screen reader users interact with a website, you can view the recording of accessibility test of our e-resources page (.mp4) done by my blind student. We did this as part of our accessibility test routines for the library electronic resources.)
Both images above should give us an idea of how a sighted user might interact with the page and how a screen reader user might hear it. Our eyes can focus on and narrow down to a certain section quickly, while screen reader users need to listen to the whole thing before they can distinguish the part that provides the actual information about the object being displayed. Hence, careful planning when designing the metadata and the page is needed to make sure our collection is both useful and usable for our audience regardless of how they access it.
A lot of data visualization relies on colored graphics to convey information. It is trickier to tackle because of the colors used and because, unlike most images used in digital collections, a data visualization conveys very rich information.
Consider this example with three different color representations:
By looking at the colors used on the image above, we can see that the information is grouped based on the region (South Asia, East Asia and Pacific, Africa, Europe, etc.) and the color density of each individual block reflects the population density of the area.
The second image shows how the visualization might be seen by those with red-green color blindness (protanopia), one of the most common types of color blindness. Here, the East Asian and African regions are no longer distinguishable. Similarly, the South American, Russian, and European regions are no longer distinguishable.
This last image shows how those colors fail to convey the grouping of the regions to those with total color blindness (achromatopsia, a rare condition, but one that exists).
The point of these examples: do not use color alone to convey meaning.
As far as I know, there is no practical solution yet for making data visualization fully accessible. Several options can help increase accessibility: supplement the color with text, or provide a summary or text description right after the image (alt text or an image caption). If the description is too long to fit on the same page, create a separate page and link to it. As with designing digital collections, designing visualizations also needs careful planning.
Designing for accessibility in our digital collections or data visualizations should be done as part of the planning phase. This allows us to optimize the output of our work and eliminate or reduce the need to revisit the design for corrections later on. Careful planning of how we want to display the information and convey the meaning of the graphics/images would benefit all of our users regardless of how they access our collections.
Last month, Google announced the new “No CAPTCHA” reCAPTCHA that is supposedly more accurate and better at preventing spam. We’ll see how this goes.
In the meantime, plenty of websites that employ Google’s reCAPTCHA still use the old version, like this:
The problem with this reCAPTCHA is that it fundamentally doesn’t work with screen readers (among other things, like forcing you to cross your eyes trying to figure out each character in the string). Some people have pointed out that reCAPTCHA offers an audio version (see that little red speaker?) that should mitigate the problem.
For something to exist, it has to have a position in time and space.
And this explains why nine-tenths of the mass of the universe is unaccounted for.
Nine-tenths of the universe is the knowledge of the position and direction of everything in the other tenth. Every atom has its biography, every star its file, every chemical exchange its equivalent of the inspector with a clipboard. It is unaccounted for because it is doing the accounting for the rest of it, and you cannot see the back of your own head.*
Nine-tenths of the universe, in fact, is the paperwork.
Like many other academic libraries, our collection consists not only of print materials but also of electronic collections. Typical electronic resources are those we subscribe to through a vendor (ProQuest, JSTOR, Elsevier, etc.) or those we produce in-house (such as https://www.lib.msu.edu/exhibits/).
The typical outcomes from these digitization projects are images, metadata, and text, represented either as an image of printed or handwritten material or as a transcript. We then create a web presence for these outcomes, including features like search, browse, and perhaps an additional application to display and interact with the images. User interaction with these digital collections should be straightforward: users should be able to visit the site, search or browse, and read the information presented on the page with ease. We also want to make the presentation of these collections pleasing to the eye, with background colors or images, font type and color, and consistent placement of the images with the associated metadata (image on top with metadata on the bottom, image on the left with metadata on the right, or whatever design decision we make to present the collection). We also want to make sure that our institution’s branding is visible, so we add the banner, image, or logo of our institution; some navigation so visitors can also go to our main website; and footers to provide visitors with contact information, acknowledgement of the funder, a link to the privacy statement, and so on.
Eventually, we produce a set of rich interfaces, chock full of images, text, and links. And probably some audio, too, for a sound project.
Given the ubiquitous nature of digital collections, the goal that these collections would be used as part of scholarly activities, and the library’s mission to disseminate the information as widely as possible, there is one aspect that many of us need to address when we plan for a digitization project: how do people with disabilities access these collections without getting lost? Can they also get the same access and benefit of our collections if they only rely on their screen readers (or refreshable Braille, or any other assistive technology)? Can people move around our website easily using just a keyboard (for those with hand-coordination difficulty who cannot use a mouse)?
Consider these questions when you begin working on any digital humanities project. Data visualization is now used a lot. Sighted users can review the visual representations easily; we can distinguish the information by shape and color. Mundane data that used to be presented as text can now have a pretty face. Information can be conveyed faster because we can see the charts and colors right away without having to go through lengthy text. But how can those who rely on sound infer the information from those charts? Can color-blind people distinguish the color palette that you use? How are you going to explain the conclusion of your charts “verbally”? These are areas that have yet to be addressed fully. We still have a lot of work to do.