• 1 Post
  • 17 Comments
Joined 1 year ago
Cake day: June 6th, 2023




  • Not quite the case.

    When a user on instance B subscribes to a community on instance A, instance A begins sending that community’s posts and comments to B in real time, and B keeps a local copy of the community.

    If instance B has 10 active users subscribed to that community on A, they’re all loading it from instance B. The end result is that instance A only has to share each piece of content once with instance B, and instance B then serves it to its ten local subscribers, reducing the load on instance A.

    The only exception is when instance B has only a single subscriber to instance A’s community, in which case replicating the entirety of the community is more work than that user just browsing it directly on instance A.

    Tl;dr: it’s most efficient for a large Lemmy instance if most of its active users are on other instances (rough arithmetic sketched below).
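    A minimal back-of-the-envelope sketch of that arithmetic in Python — not actual Lemmy code, just the ten-subscriber example above turned into numbers:

    ```python
    # Deliveries instance A makes when readers fetch directly vs. when a
    # single federated instance B replicates the community for them.

    def direct_load(posts: int, readers: int) -> int:
        """Every reader fetches every post straight from instance A."""
        return posts * readers

    def federated_load(posts: int, remote_instances: int) -> int:
        """A pushes each post once per subscribed remote instance; those
        instances then serve their own local readers."""
        return posts * remote_instances

    posts = 100
    print(direct_load(posts, readers=10))             # 1000 deliveries from A
    print(federated_load(posts, remote_instances=1))  # 100 deliveries from A
    ```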


  • I had this issue initially with my own instance, specifically with Lemmy.ml.

    In my case the issue was related to my subscription status. On the remote community, does it show you as subscribed or as subscription pending? (There’s a quick API check sketched at the end of this comment.)

    It showed as subscription pending for a few hours, then I finally unsubscribed and resubscribed, and that time the subscription worked correctly and comments started flowing to my instance.
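    If you’d rather check that state programmatically than through the UI, here’s a minimal sketch, assuming a recent Lemmy HTTP API (v3) and the Python `requests` library — the instance URL, token, and community name are all placeholders:

    ```python
    import requests

    INSTANCE = "https://your-instance.example"  # your own instance (placeholder)
    JWT = "your-login-token"                    # placeholder auth token

    # Ask your own instance how it currently sees the remote community.
    resp = requests.get(
        f"{INSTANCE}/api/v3/community",
        params={"name": "somecommunity@lemmy.ml"},   # placeholder community
        headers={"Authorization": f"Bearer {JWT}"},
    )
    resp.raise_for_status()
    # Expected values: "Subscribed", "Pending", or "NotSubscribed"
    print(resp.json()["community_view"]["subscribed"])
    ```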


  • If a large corp wants to do what you’re suggesting, they don’t need to launch a big announced project.

    They can spin up a federated instance with just one user and no references to who owns it, then have patsy accounts on that instance subscribe to communities on other instances, and all the data they want gets sent to their semi-secret instance.

    It would be very difficult to identify this in a large, healthy federation with tons of users and lots of small personal instances.



  • Super cool approach. I wouldn’t have guessed it would be that effective if someone had explained it to me without the data.

    I’m curious how easy it is to “defeat”. If you take an AI-generated text that is identified with high confidence and superficially edit it to include something an LLM wouldn’t usually generate (like a few spelling errors), is that enough to push the text out of high confidence? (Something like the sketch at the end of this comment.)

    I ask because I work in higher ed and have been sitting on the sidelines watching the chaos. My understanding is that there’s probably no way to automate LLM detection with high enough certainty for it to be used as cheat detection in an academic setting; the false positive rate is way too high.
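    For concreteness, the kind of perturbation I mean, as a toy Python sketch — `detector_confidence` is a hypothetical stand-in for whatever detector is being tested:

    ```python
    import random

    def add_typos(text: str, n_typos: int = 3, seed: int = 0) -> str:
        """Swap a few pairs of adjacent characters: the sort of noise an
        LLM rarely produces on its own."""
        rng = random.Random(seed)
        chars = list(text)
        for _ in range(n_typos):
            i = rng.randrange(len(chars) - 1)
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
        return "".join(chars)

    print(add_typos("This text reads cleanly and confidently."))

    # Hypothetical usage: does light noise push the score below threshold?
    # original = "...confidently-detected AI text..."
    # print(detector_confidence(original))             # e.g. 0.98
    # print(detector_confidence(add_typos(original)))  # still high?
    ```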

  • Raspberry Pi 4 running Home Assistant

    Intel NUC running Frigate and a Minecraft server

    Custom-built PC (i3-10100, 16 GB RAM, GTX 1070 for transcoding; 24 TB array with two parity disks, 2x 3 TB SSDs in the array for Docker, OS, etc.) with quite a lot of storage, running Unraid. It’s my media server, backup server, and now my Lemmy server.

    Network is a MikroTik hEX S router and a Netgear gigabit switch, with 1 Gb fiber internet. 2 Ubiquiti APs for wifi in the house.