I don’t think I understand the goal. Is it to create/promote/enforce a standard whereby potentially painful or otherwise irritating GIFs (by which I assume we mostly mean animated ones) are always opt-in, as in require a positive click-to-animate, or at least an opt-in setting in browsers, as opposed to the current standard which requires one to seek out a way to deactivate animations within one’s own browser, effectively making it a cumbersome opt-out for those most negatively affected by them? Or have I missed something?
In my install of Firefox 35.0.1, the animated gif loaded in clear and running, then blurred when I moused over it. When I clicked on the blurred gif, it went clear again.
Typically still images on the BBS load in behind the spoiler filter, but I think with the animated gif, blitting to the next frame negates the blur filter until you mouse over and reset the whole thing.
My webdork gut’s telling me this has something to do with the stylesheet being loaded in as part of the header and applied before the image loads in, and not reapplying until after you poke at the page with the mouse.
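For what it’s worth, this kind of spoiler blur is usually just a CSS filter that gets swapped off on hover or click. A rough sketch of the sort of rule involved (the selectors and values here are made up for illustration, not Discourse’s actual stylesheet):

```css
/* Hypothetical spoiler rule, for illustration only */
.spoiler img {
  filter: blur(10px);       /* applied when the stylesheet loads */
  transition: filter 0.2s;
}
.spoiler:hover img,
.spoiler.revealed img {     /* 'revealed' class toggled by a click handler */
  filter: none;
}
```

If the blur really is being dropped when the GIF advances a frame, that would point at a browser bug in compositing filtered animated images, rather than anything in the CSS itself.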
Fair enough. On Chrome, it stays blurred until I click on it, although hovering the mouse over it makes it slightly less blurred. The blurring smooths it out a bit, which I thought might help with the glitching.
Good to hear. I have a lot of extensions set up to try to prevent tracking and cookies: Ghostery (I know, it’s not very effective), Self-Destructing Cookies, NoScript (cloudflare and quantserve and such are blocked), Adblock Plus. And I’ve been in-place upgrading Firefox since about 3.5, so maybe it’s time I just nuke-and-pave and start over fresh.
One part of accessibility would be to find a better native way to address the pain/irritation some people experience from autoplay. That seems to be MarjaE’s main gripe.
But it seems there could be other problems for people with low vision / blind who can’t see the GIF. I know for static images the protocol is to have quality alt-text describing the image for the person who can’t see it, so that screen readers can translate that text-to-speech. However, I’ve no idea whether that works as well for looping GIFs. Nor, when I go hunting for a GIF to use, how to assess whether the GIF I’m picking lacks the accessibility features. Also, when I’ve made GIFs, the tools I’ve used never prompted me for any alt-text or other accessibility feature, which leads me to suspect this is getting overlooked.
GIF, itself, has nothing for accessibility. It’s a (not terribly good; but often good enough) compressed image format.
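As an aside, the format is so bare that even telling whether a GIF is animated means sniffing the bytes yourself; there’s no “I am animated” flag in the header. A quick heuristic, not a full parser (it keys off the NETSCAPE2.0 looping extension that nearly all animated GIFs carry, so a multi-frame GIF authored without it would slip through):

```python
def looks_animated(gif_bytes: bytes) -> bool:
    """Heuristic check for an animated, looping GIF.

    Looping animated GIFs almost always carry the NETSCAPE2.0
    application extension (it sets the loop count), so we just look
    for that marker rather than walking the block structure and
    counting image descriptors, which a real parser would do.
    """
    if gif_bytes[:6] not in (b"GIF87a", b"GIF89a"):
        raise ValueError("not a GIF")
    return b"NETSCAPE2.0" in gif_bytes
```

A proper implementation would skip through the LZW data sub-blocks and count image descriptor blocks instead, but this covers the common case.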
In the web context, good old ‘alt’ is the go-to: the img tag will provide the string specified by ‘alt’ for use by text-based browsers or screenreader software. This has nothing to do with GIF, though; it works exactly the same for any image format the browser supports (and may even work if you specify something invalid as the ‘src’, though I haven’t tried that).
The case of non-blind, but visually sensitive in some way, users would be a trickier one. You would presumably need to know what stimuli are problematic, and filter for those at the GIF renderer layer. I know of no such mechanisms, unless you count the (limited) ‘prevent gifs from looping without user authorization’ features built to keep users from being annoyed, or any manual work done by the creator of the GIF to ensure that it is not going to push somebody’s buttons.
Alt, at least on the web, is the way to go (GIF certainly doesn’t, and image formats in general tend not to, provide any internal support for ‘accessibility’ scenarios, because they are designed simply to store images, not to cope with all the various ways that somebody might want to not use images, or to use images with modifications for photosensitivity, red/green colorblindness, etc.). HTML, though, is generally supposed to provide for ‘fallback’ or ‘alternative’ display modes for various elements, both because of users with particular needs and because of devices and software with particular limitations.
Alt text works the same way for any image format; it isn’t intrinsic to the image, but to the page embedding it, so GIFs should be covered. If the tool also auto-generates HTML, though, you might have a problem, since the alt text has to be defined there for it to work, and more than a few tools are lazy or deficient on that score.
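Concretely, the description lives on the img element in the page, not inside the file, so it works identically whether the src points at a GIF, PNG, or anything else (the filename here is a made-up example):

```html
<!-- The description travels with the page, not with the image file -->
<img src="deal-with-it.gif"
     alt="Animated GIF: sunglasses drop onto a cat's face; caption reads 'deal with it'">
```

A screenreader or text-based browser reads out the alt string in place of the image; sighted users never see it unless the image fails to load.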
I’m not sure what you did there. I already have an add-on which disables animated gifs [unless and until I choose to watch them, or unless and until an update breaks it], but I think these things should be a standard browser feature, while animated gifs and autoplay generally should not be the default, and should be considered bad form.
As for the phones: phones hurt, and really hurt. And phones present a lot of accessibility problems, but are often the only way to contact public officials, public services, and to contact public services about accessibility problems…
I think this made sense back when the primary way people viewed images was on webpages, because you could always harangue the email@example.com to get with W3C standards.
Now that image files are so portable (I like to pluck them from image banks and plug them into tweets, bbs, e-mails, etc), I’m questioning the premise. If a significant (and, I’d argue growing) way we use image files is to transplant them from one context to another, wouldn’t it be nice if the format had some inherent accessibility features baked in?
I can understand the temptation of making the features ‘inherent’; but it has architectural perils that make me distinctly uneasy.
Complexity is bad: no format gets easier to implement, easier to implement safely, or well implemented on most/all relevant platforms by being complex. Some degree of complexity is unavoidable, of course, if you actually want the format to be useful; but you want to avoid introducing more than necessary. It makes implementation harder, increases the chances of ambiguities that render independent implementations incompatible, raises the odds of ugly and creative exploits, and so on. It also tends to encourage the growth of partial implementations, which are annoying in general and can be very bad for accessibility: if a format is relatively tractable, and software either does or doesn’t support it, you can rely on whatever features it has being available. If it’s a ghastly, multifaceted, totipotent tumor-growth of a format (looking at you, DICOM… and arguably PDF-as-Adobe-does-it; better-ordered subsets like PDF/A are safer), you can’t be sure that ‘supports Format X’ actually implies that a given piece of software supports Specialty Feature Y (indeed, in hostile environments like message boards or chat systems, certain special features may be explicitly stripped to keep people’s rickrolling and shock-site redirects to a minimum). If you can’t depend on the feature being supported by a given piece of software, then the value of the feature being ‘inherent’ just isn’t as great.
Logical separation of different classes of features is very helpful in keeping complexity down (see #1). In the case of text, Unicode defines characters and tries as much as possible to stay away from actually defining their appearance, which it leaves to fonts; fonts define the appearance of characters but avoid interacting with color, line spacing, blinking, size, and other attributes that are left to the document format or markup language. This logical separation makes life easier for applications that only need to support one aspect of the broader problem (e.g. the string-manipulation libraries in a programming language can be Unicode-aware without needing to be a full-blown word processor), and complexity is compartmentalized into a series of smaller, more manageable parts.
Sort of a corollary of #2, maybe a lemma: duplication of the same capability (or an incompatible-but-similar-ish one) in multiple locations within the overall stack of formats and standards being used is Not Good. Aside from the mere increase in total complexity, there’s the problem of interpreting multiple, potentially contradictory or ambiguous, approaches to the same problem without making a hash of the result. If, say, a font were extended to carry ‘color’, what would the correct response be if the entire paragraph has been assigned one color at the document level; but part of that paragraph has a different color assigned at the font level? Even more fun: what if the font’s color definition is based on a 24-bit RGB definition, with no alpha channel; but the document editor is being used to produce CMYK output, or the font is being used on a web page that does use transparency/translucency? Things have the potential to get exciting, especially if ‘transplantation from one context to another’ is a routine occurrence.
The other major reason to say ‘stick with HTML’ is that, in practice, a substantial percentage of the different contexts are basically just other wrappers around the same browser engine. For sake of convenience, if nothing else, tweets, message-board posts, emails, etc. are typically shoved straight through whatever HTML renderer is platform-appropriate unless you specifically demand plain text. Even ‘apps’ and native client applications frequently use the available HTML renderer, and browser-based views obviously do. If you want advanced formatting, your email will be a web page, albeit a static one. Bulletin boards and Twitter and the like are pretty much web CMSes; but with really constrained editing features.
In a substantial majority of cases, all the technical capability required has already been inherited, more or less for free. It may not be made available (BBSes typically constrain you to a reduced set of markup in order to keep malicious users from munging the entire page, Twitter gives you very little control at all, etc.); but all the tech is in place. If they can’t be bothered to add a button that lets you turn it on, when it is already there, the odds that they’d bother to implement a new set of intrinsic features to duplicate that function are not terribly promising.
I’m certainly in favor of accessibility; but format overloading is likely not the way to get it, at least not without substantial unnecessary pain.
So, what are you proposing? Should something like Discourse auto-detect when you post an image file, and then prompt you to enter alt text (or skip)? Maybe make this a user-configurable setting for the cretins who never want to be bothered?
Based on some quick testing (above), using the ‘upload’ button to add an image produces an ordinary img tag. At present it doesn’t have a box for alt text (though one would fit reasonably well in the window where you select either a local source or a URL); but it does respect an alt string if you add one to the img tag it creates.
That would be more or less my proposal: depending on exactly how hard you wanted to encourage it, you could either have a separate prompt, have an optional field in the upload window, or never prompt the user; but don’t interfere with an alt string if they add one.
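So in practice, until a field exists for it, you’d hand-edit the tag the uploader spits out; something along these lines (the path is a made-up placeholder, not what Discourse actually generates):

```html
<!-- Roughly what the upload button generates: -->
<img src="/uploads/example/cat.gif" width="250" height="250">

<!-- The same tag with a description added by hand: -->
<img src="/uploads/example/cat.gif" width="250" height="250"
     alt="Animated GIF of a cat knocking a mug off a table">
```

The nice part is that nothing breaks if the user skips the step; the alt string is purely additive.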