• Lvxferre@mander.xyz
    2 days ago

    A spokesperson for Meta said in a statement the company is “disappointed” and insists its method “complies with privacy laws and regulations in Brazil.”

    Yeah, just like my cat complies with the policy of leaving my furniture alone. You aren’t fooling anyone, Meta.

    “This is a step backwards for innovation, competition in AI development and further delays bringing the benefits of AI to people in Brazil,” the spokesperson added.

    Cut the bullshit. Only Meta itself will reap the benefits of this sort of rubbish.


    The relevant agency behind the decision mentioned in the article in the OP is the ANPD (Autoridade Nacional de Proteção de Dados, "National Data Protection Authority").

    Additionally, the Senacon (Secretaria Nacional do Consumidor, roughly "National Consumer Secretariat") is also going after Meta, demanding that it clarify:

    • its use of customer data to train AI;
    • the purpose of that usage;
    • its impact on customers;
    • the data usage information policy being adopted;
    • the support channels [if any] that allow customers to exercise their rights [in this regard]

    I think that this is actually a bigger deal than what the ANPD did. It basically means that the consumer protection bodies in Brazil aren’t really buying Meta’s bullshit about “chrust us we have legitimare inrurrest”.

    Source, in Portuguese.

    Another relevant tidbit is that, when it comes to privacy, data, and the internet, the typical modus operandi of Brazilian agencies is “copy-paste what’s being done in Europe”. And lots of European governments “happen” to be rather pissed at those megacorps.


    Perhaps now I can convince my relatives to use Matrix instead of that disgusting shit called zapzap (WhatsApp).

  • AutoTL;DR@lemmings.world [bot]
    2 days ago

    This is the best summary I could come up with:


    The decision stems from “the imminent risk of serious and irreparable or difficult-to-repair damage to the fundamental rights of the affected data subjects,” the agency said in the nation’s official gazette.

    Meta did not provide sufficient information to allow people to be aware of the possible consequences of using their personal data for the development of generative AI, it added.

    Human Rights Watch released a report last month that found that personal photos of identifiable Brazilian children sourced from a large database of online images — pulled from parent blogs, the websites of professional event photographers and video-sharing sites such as YouTube — were being used to create AI image-generator tools without families’ knowledge.

    Hye Jung Han, a Brazil-based researcher for the rights group, said in an email Tuesday that the regulator’s action “helps to protect children from worrying that their personal data, shared with friends and family on Meta’s platforms, might be used to inflict harm back on them in ways that are impossible to anticipate or guard against.”

    But the decision regarding Meta will “very likely” encourage other companies to refrain from being transparent in the use of data in the future, said Ronaldo Lemos, of the Institute of Technology and Society of Rio de Janeiro, a think-tank.

    “Meta was severely punished for being the only one among the Big Tech companies to clearly and in advance notify in its privacy policy that it would use data from its platforms to train artificial intelligence,” he said.


    The original article contains 590 words, the summary contains 248 words. Saved 58%. I’m a bot and I’m open source!