The year is 2019 and I can’t buy a good majority of consumer technology because we lack privacy legislation and consumer protections. Example: it’s absurd that my TV came with spyware that can’t be turned off or avoided; I had to stop it from phoning home at the network level. It also came with an arbitration clause and a clause waiving the right to a class action lawsuit.
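"Blocking at the network level" usually means a DNS blackhole or a firewall rule on the router. A minimal sketch using dnsmasq — the domain names and the TV's IP address here are purely hypothetical; the real telemetry endpoints have to be discovered by watching the TV's actual traffic:

```
# /etc/dnsmasq.d/smart-tv.conf
# Resolve the TV vendor's (hypothetical) telemetry domains to nowhere,
# so the TV's phone-home requests never leave the LAN.
address=/telemetry.example-tv-vendor.com/0.0.0.0
address=/ads.example-tv-vendor.com/0.0.0.0

# A TV that hard-codes a public DNS server bypasses dnsmasq entirely,
# so additionally block outbound DNS from the TV's (assumed) IP at the
# firewall, e.g. in nftables syntax:
#   nft add rule inet filter forward ip saddr 192.168.1.50 udp dport 53 drop
```

This only works if the TV actually uses the router's DNS; devices with hard-coded resolvers or DNS-over-HTTPS need firewall-level blocking instead.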
I think the problem is not lack of legislation. The tech monopoly of big corps exists because people bought into it: they sold their privacy for convenience and trendy blinking lights. Furthermore, it is impossible for lawmakers to understand new technologies well enough to write specific laws for each new tech trap, and impossible to stop the stupidity of people with the "it's ok, I have nothing to hide" mindset.
We don't expect people to be experts in chemistry and food safety in order not to get poisoned by the food they buy. That's what food safety standards are for.
And yet we expect people to become tech and legal experts, reading through endless EULAs, understanding the fine print, and then verifying the tech behind it, just to protect their basic privacy?
@hansbauer @retrohacker legislators were able to create food safety standards that make getting poisoned by store-bought food vanishingly unlikely. They were able to create regulations around medicines that make it highly unlikely for people to be poisoned by actual, you know, poisons (every medicine is a poison in the right amount).
We can, and should, expect legislators to step in and regulate the IT industry.
Market will not solve it.
I've found this language helpful for thinking about some aspects of some of these problems:
There is some value in having people be the ultimate arbiters of what goods and services they buy.
But, to get reasonably safe and good things, we need the support of experts. And we need those experts to do their work on our behalf.
But it is *not* an independent decision if the person is misinformed, or lacks the information needed to make an informed choice.
Legislation is needed (among other things) to create a baseline of quality of information about products that matches people's baseline expectations.
I also want to point out that expecting people to 100% advocate for themselves in terms of tech and privacy is a privileged and even ableist position. Not everyone who gives in, does so out of laziness, convenience, or even ignorance. Some genuinely have few options.
@deejoe @hansbauer @retrohacker
@erosdiscordia @rysiek @deejoe @hansbauer @retrohacker
This is a matter of education: it's totally possible to teach programming, networking, and crypto before age 13. Why don't we? Because many people don't even understand that they are the used, not the users.
I'm totally in favor of regulation, but I'm scared by the incompetence of politicians, even in Europe.
What I read on #AI looks scary: they totally misunderstand what it is, how it works and can be abused.
Regulating AI seems nonsensical to me. I'm not sure we want to regulate industries or technologies. I'm pretty sure we want to regulate behaviors.
Start with human rights and work out the implications. The limits imposed on industries and technologies are derived from the human rights they aren't allowed to infringe on. It's not "you are allowed to use AI in these ways", it's "no technology or person may infringe on the right [of/to]".
1. the derivation should be logical
2. you must understand the topic
People and experts talking about "bias" or "non-determinism" (of software executed on deterministic machines) show what can only be either deep incompetence or malicious lobbying.
Regulating AI should be simple:
- forbid applying opaque ("black box") systems to human data
- always hold a human accountable
> forbid black boxes
I’m not sure this needs to be a regulation. Folks are free to do what they want with the tech they build, but they are responsible for the actions it takes. It’s risk management. If they aren’t able to comprehend the system they built, they are accepting that they may be found guilty of crimes that system commits. The decision to not use black boxes is easily derived from liability assuming we have balanced legislation.
With black boxes you need an enormous number of similar damages to prove a crime occurred, and corporations will invoke industry standards to protect their interests.
Several people have already died, killed by #SelfDrivingCars, and no #CEO has gone to jail for murder. OTOH you would need an enormous number of people from a discriminated minority to prove that an AI system is wrong.
#Transparency means that each and every error must be fully detectable, reproducible, and easy to debug (i.e., explained fully and clearly).
Without both of these principles, the rich will be above the Law by using an autonomous proxy.