Many Lemmy users wish their niche communities would become more populated. For goals like that, the reputation of the platform is important. I also don’t want to get into defensive debates when revealing to someone that I use Lemmy.
Please try to form coherent sentences so I don’t have to guess what you might have meant. Or don’t. After all, it’s you who wants something here.
im talking about the 7 downvotes i have
i didn’t get 7 negative replies
Sounds as if you think you should get a reply for each vote.
Is that an attitude of expectation? No one is obliged to explain their voting behavior to you.
We can suggest to voters to also leave a comment, explaining how that would lead to an overall more pleasant experience. But you cannot demand replies.
I can imagine some of the downvotes of this comment come from this attitude, which can seem entitled and inappropriate.
Good idea! Checked and found: https://github.com/LemmyNet/lemmy/issues/3042
Add a “disable inbox replies” option of posts and comments #3042
You can support the issue by leaving a thumbs up or a comment.
Copy the thread URL and paste it into an incognito-mode browser that’s not logged into the instance.
I think you get the same result using the colorful button, which each post and comment has:
Ah, thank you.
No, grouping by tag is currently not supported.
The feature request pops up frequently, and there are corresponding open issues on GitHub.
I would also like to have this. I’m not sure how soon, if ever, it will be implemented.
I understand this may not be what you’re after, but the following question might help clarify your intent:
Why don’t you visit a specific community which is dedicated to your keyword of interest? How would that be different from what you want?
There’s no general answer, it depends on your personal preference.
If you want to have the most content available, register on an instance with a correspondingly open policy: one which federates with everybody and is federated by everybody (both directions can make a difference).
The downside, however, is that this also opens the door to all sorts of bad actors, including bots and spam.
So I personally tried to strike a balance and am so far quite happy on lemm.ee.
This tool is pretty handy to make informed decisions: https://fba.ryona.agency/ It allows you to check federation status both ways.
There should be a place to document all the nuance around hosting an instance plus some tips and tricks.
The Wiki: https://joinfediverse.wiki/What_is_Lemmy%3F
Hopefully it gets new contributors and maintainers from all the new users.
Thanks for the write-up!
If you want to see which other instance blocks yours to inform them of the changes you made: https://fba.ryona.agency/?domain=lemmy.ninja
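As a rough sketch: assuming Lemmy’s public `/api/v3/federated_instances` endpoint (the response shape varies between versions — older ones return plain domain strings, newer ones return objects with a `domain` field), you could check a blocklist programmatically:

```python
import json
from urllib.request import urlopen


def is_blocked(federated: dict, target: str) -> bool:
    """Return True if `target` appears in the instance's blocklist.

    Handles both response shapes: a list of plain domain strings
    (older Lemmy versions) or a list of objects with a "domain" key.
    """
    blocked = federated.get("federated_instances", {}).get("blocked") or []
    for entry in blocked:
        domain = entry if isinstance(entry, str) else entry.get("domain")
        if domain == target:
            return True
    return False


def check_instance(instance: str, target: str) -> bool:
    # Endpoint path is an assumption based on the public Lemmy HTTP API.
    url = f"https://{instance}/api/v3/federated_instances"
    with urlopen(url) as resp:
        return is_blocked(json.load(resp), target)
```

`check_instance("lemmy.ninja", "lemm.ee")` would then tell you whether lemmy.ninja blocks lemm.ee; the fba.ryona.agency page above does the same, in both directions.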
This is silly.
The article is an anecdote about one incompetent user using a new tool: ChatGPT.
He uses the wrong tool for what he’s trying to accomplish: finding sources. The free version of ChatGPT cannot search the internet, and it does not have the reliable internal fact memory he seems to assume.
So he, like many others, runs into hallucinations.
Then he jumps to conclusions:
How much weight does this assessment or article have?
People who better understand what they can expect from an LLM, and who are willing to invest a bit more time into learning how to use a new tool well, will of course produce better results.
If you want an LLM which can find sources, use an LLM which can find sources: the paid ChatGPT (GPT-4), Bing AI, or perplexity.ai.
Like all tools that are used well, they become a productivity multiplier, which naturally means less workforce is required to do the same work. If your job involves text and you refuse to learn how to use state-of-the-art tools, your job is probably not that safe. Yes, maybe “for the next week or so”, but AI development has not stopped, so what good does that do? You’re not going to be replaced by AI, but by people who have learned how to work with AI.
Here’s a paper on the topic, which comes to vastly different conclusions than this anecdotal opinion piece: GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models
You can upload it to https://www.chatpdf.com/ to get summaries or ask questions.
Oh, I wasn’t aware. Coming from Lemmy, the correct relative link to a kbin instance would be /m/. But since the community lives on a Lemmy instance, it is /c/. Apparently, coming from kbin, you need /m/ in both cases. I’ve added a link for both now, thanks.
The article complains that the use of the word “hallucinations” is …
feeding the sector’s most cherished mythology: that by building these large language models, and training them on everything that we humans have written, said and represented visually, they are in the process of birthing an animate intelligence on the cusp of sparking an evolutionary leap for our species.
Whether that is true or not depends on whether we eventually create human-level (or beyond) machine intelligence. No one can read the future; personally I think it’s just a matter of time, but there are good arguments on both sides.
I find the term “hallucinations” fitting, because it conveys to uneducated people that a claim by ChatGPT should not be trusted, even if it sounds compelling. The article suggests “algorithmic junk” or “glitches” instead. I believe naive users would refuse to accept an output as junk or a glitch; those terms suggest something is broken, although the output still seems sound. “Hallucinations” is a pretty good term for that job, and it is also already established.
The article instead suggests the creators are hallucinating in their predictions of how useful the tools will be. Again, no one can read the future, but maybe. Mostly, though: it could be both.
Reading the rest of the article required a considerable amount of goodwill on my part. It’s a bit too polemical for my liking, but I can mostly agree with the challenges and injustices it sees forthcoming.
I mostly agree with #1, #2 and #3. #4 is particularly interesting and funny, as I think it describes Embrace, Extend, Extinguish.
I believe AI could help us create a better world (in the large scopes of the article), but I’m afraid it won’t. The tech is so expensive to develop that the most advanced models will come from those who already sit at the top of the pyramid, and will foremost multiply their power, which they can use to deepen the moat.
On the other hand, we haven’t found a solution to the alignment and control problem, and we aren’t certain we ever will. It seems very likely we will continue to empower these tools without a plan for what to do when a model actually shows near-human or even super-human capabilities, and can already copy, back up, debug and enhance itself.
The challenges to economy and society along the way are profound, but I’m afraid that pales in comparison to the end game.
the results are “high” as much as 10 percent because the researcher do not want to downplay how “intelligent” their new technology is. But it’s not that intelligent as we and they all know it. There is currently 0 chance any “AI” can cause this kind of event.
Yes, the current state is not that intelligent. But that’s also not what the experts’ estimates are about.
The estimates and worries concern a potential future, if we keep improving AI, which we do.
This is similar to being in the 1990s and saying climate change is of no concern, because the current CO2 levels are no big deal. Yeah right, but they won’t stay at that level, and then they can very well become a threat.
saying AI will ruin humanity’s existence or bring “disempowerment” of the species is a completely awful view that has no way of happening just simply due to the fact that its not profitable.
The economic incentives to churn out the next powerful beast as quickly as possible are obvious.
Making it safe costs extra, so that’s gonna be a neglected concern for the same reason.
We also notice the resulting AIs are being studied after they are released, with sometimes surprising emergent capabilities.
So you would be right if we approached the topic with a rational, high-level view, but we don’t.
The community for the app:
I’m also happily using it, it’s the only app which works on my old phone, haha :D
Not sure if social media in general has failed. That particular point can be solved at the community level.
Create or join a community whose guidelines restrict posting paywalled or otherwise undesirable content, and which explicitly encourages posting “liberated” content. Have moderation. Problem solved: moderators will remove everything you dislike, and all that remains is the solution you want.