I haven't had enough bad tech opinions recently

HTML should have been killed and replaced with XHTML, and most problems with the Web nowadays can be traced back to the failure of the XHTML2 development process and the creation of WHATWG and HTML5.


@alexandra At one point in my history of being a web developer, I would have agreed with you. The problem is that browser makers didn't like that malformed content was *supposed* to fail spectacularly.

Basically, they had to account for the fact that content authors were going to be idiots and that quirks mode was more like default mode.

@alexandra I tend to agree with their assessment. Having content explode on an end user because of an unclosed <p> is... suboptimal for the end user. No matter how hilarious.

@alexandra I think, as far as HTML goes, that the current structure system is fine-ish. It could do with some cleanup, but the same could be said of anything.

There are days that I wish the browser would slap my wrist on encountering markup errors in my content, but given that HTML is still just text, and therefore easily mangled by any number of intermediaries, it’s wise to handle structure errors gracefully.

@nathand@fosstodon.org Personally I still prefer to use the XML version of HTML5 and make sure my pages go through at least one XML parser before they're published, but that's not always practical.
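
For what it's worth, that "at least one XML parser" gate can be a tiny script. A minimal sketch in Python's standard library, assuming pages get built into a build/ directory as .xhtml files (the layout is made up):

import sys
import xml.etree.ElementTree as ET
from pathlib import Path

def check_well_formed(path: Path) -> bool:
    """Return True if the file parses as XML; report the error if not."""
    try:
        ET.parse(path)  # raises ParseError on any well-formedness violation
        return True
    except ET.ParseError as err:
        print(f"{path}: {err}", file=sys.stderr)
        return False

if __name__ == "__main__":
    pages = Path("build").rglob("*.xhtml")  # hypothetical build layout
    results = [check_well_formed(p) for p in pages]  # check every page
    if not all(results):
        sys.exit(1)  # refuse to publish if any page is malformed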

@alexandra For a long time I was all XHTML 1.0 on everything. However, poor support and flagging demand for XML's rigidity led me to embrace HTML 4.01, despite it being older and crustier. It had (at the time) clearer semantics to me, and widespread support.

@nathand@fosstodon.org This argument can go either way, obviously, but "my" side goes: content authors couldn't (unknowingly) be idiots if the browser they were testing with rejected malformed content. It's a self-perpetuating problem. Browsers accepting malformed content lets malformed content get published without anyone noticing it's wrong, which in turn makes it harder for browsers to stop accepting malformed content.

XHTML was supposed to make a clean break by allowing browsers to know for sure whether something was possibly wobbly or whether they should insist on correctness, and by the same token allowing authors to know for sure their content would be interpreted either consistently and sensibly or not at all.
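
The split is easy to demonstrate. A minimal sketch (Python standard library; the document string is mine): the same unclosed <p> is a hard error to an XML parser but sails straight through an HTML parser.

import xml.etree.ElementTree as ET
from html.parser import HTMLParser

doc = "<html><body><p>first<p>second</p></body></html>"

try:
    ET.fromstring(doc)  # strict: any structural error is fatal
except ET.ParseError as err:
    print("XML parser:", err)  # reports a mismatched tag

HTMLParser().feed(doc)  # lenient: no exception, it just recovers
print("HTML parser: no complaints")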

XHTML2 was meant to be another level of improvement by using XML tools (e.g. XLink) to make HTML semantically stronger and reduce the amount of scripting needed to implement useful dynamic content.

@alexandra That is absolutely fair. I would argue that more meaningful structure should have been at the core of the HTML5 project. Of course, it was co-opted so that Google and friends could push more meaningless web apps, with structure bolted on later via JSON. Things could have been so much better.

I guess I’m not *unhappy* with the current markup tools, I guess I’m just sad to see Betamax fail, again.

@nathand @alexandra i don’t think the problem with html has ever been malformed content. in fact, i can tell you from my experience maintaining rss feeds that xml can become malformed very easily and accidentally in situations where there’s a templated generator. you might suggest that xml be generated with an xml-aware parser/generator, but these also tend to be extremely slow, complicated, and full of bugs, to the point that you’re better off with the templates.
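
a tiny sketch of that failure mode in python (the feed string and title are made up): one bare ampersand from a post title and the whole feed stops parsing, and escaping at the template layer is the whole fix.

import xml.etree.ElementTree as ET
from xml.sax.saxutils import escape

title = "Fish & Chips"  # hypothetical post title

naive = f"<rss><channel><title>{title}</title></channel></rss>"
try:
    ET.fromstring(naive)
except ET.ParseError as err:
    print("naive template:", err)  # not well-formed at the bare &

safe = f"<rss><channel><title>{escape(title)}</title></channel></rss>"
ET.fromstring(safe)  # parses cleanly once text nodes are escaped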

@nathand @alexandra the problem with html and its complexity is really browsers tolerating elements nesting into any context. a form can wrap a whole page. divs can wrap anything. you are not supposed to, but you can wrap <a> around anything. you can stick an iframe, style, script or an object anywhere without restriction.

javascript “on” event attributes on any element.
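
a quick illustration of how little pushback any of that gets at the parsing layer, using python's tokenizing HTMLParser as a stand-in (the markup is deliberately pathological and made up):

from html.parser import HTMLParser

pathological = (
    "<form><a href='#'><div onclick='x()'>"
    "<iframe></iframe><style></style></div></a></form>"
)

class Logger(HTMLParser):
    def handle_starttag(self, tag, attrs):
        print("open:", tag, attrs)

Logger().feed(pathological)  # every element accepted, in any context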

@nathand @alexandra it’s this that makes browsers hard to implement compatibly, and xhtml would never have fixed it, because it would still have needed some level of backward compatibility with existing sites and frameworks that would not have migrated to a format that doesn’t support their use cases.

@nathand @alexandra there especially was no upside to doing so. you put a bunch of work into converting your site to xhtml so it can work exactly the same, but also break completely more often. it’s not a good deal.

@alexandra @nathand i've been semi ga-ga over json-ld having an "@id" thing, which can either be a field on a json object meaning "this url is my id",
{ "@type": "boat", "@id": "example.com/blueshark"

or can be inside a property, identifying the entity within,
{ "@type": "boat", "sail": { "@id": "example.com/mainsail" } }

there are parallels in html via rdfa and microdata! which is what xlink was doing, so long ago!! giving entities in the page their own identities!!! so so capital, so core, so missing!!

very much digging the reminder about what xlink was about. thanks!

@alexandra kind of amazing that one of the chief anti-XHTML folks is still such a force in web standards,
annevankesteren.nl/2005/05/xht
