5 Comments

Did labor take crib notes from the movie V for Vendetta?

Misinformation Bill BS came from the Covid years. Eerily similar eh?

Now the +16yrs for social media?

Did they watch the movie until the end? Cos it did not go well for the Government.

F'n ayyyyy! 👍

Get a mullet up ya, gubment.

WEFers gonna WEF....

>>The Greens want “comprehensive reforms that tackle the business models and dangerous algorithms that fuel division and damage democracy, and legislate a duty of care so these platforms prevent harm in the first place,” said Senator Hanson-Young. <<

Great to see that you are quoting the intellectual heavyweights, Rebekah:):):)

What on earth does that statement of hers mean? The phrases 'verbal diarrhoea' and 'word salad' come to mind here.:)

"tackle the business models and dangerous algorithms"

Such great, unambiguous modern jargon.:)

"... that fuel division and damage democracy"

Are we to believe that online activity that fuels division is necessarily harmful and/or undemocratic ... that democracy demands everyone has the same belief set?:)

"... duty of care"?

Does Telstra have a duty of care to its users that its network not be used in harmful ways?:)

Btw, I am heavily inclined to believe that so-called 'influencers' do not cause people to change their minds. They merely tap into subsets of the community whose members have already developed hardened perceptions of life and with whom the provided material resonates. They are feeding a market what it wants and making a living out of the same.

author

So there would be two parts to what I think the Greens want as regards algorithms.

1. For platforms to have to make their algos transparent. I agree with this in principle.

2. For platforms to have a 'duty of care' to ensure their algos and features aren't driving harmful effects (presumably like ensuring eating-disorder content is not recommended to adolescents, etc). I assume they're thinking of it as analogous to Disneyland having to ensure safety standards on its rides, or something like that. But since we're in the realm of vaguely defined mental, emotional and social harms/care, this seems harder to achieve. I would be interested to read more on the specifics of how they think that should work (I am part way through the social media inquiry report on that front).
