Huh… Will this message then get re-ingested by chatgpt? Did it just poison itself?
A monopoly with *checks notes* 30% market share. It has a plurality, but not a majority.
misconfigured
Makes me skeptical this is a real “loophole”
The issue revolves around permissions, with GKE allowing users access to the system with any valid Google account. Orca Security said this creates a “significant security loophole when administrators decide to bind this group with overly permissive roles.”
Orca Security noted that Google considers this to be “intended behavior” because in the end, this is an assigned permission vulnerability that can be prevented by the user. Customers are responsible for the access controls they configure.
The researchers backed Google’s assessment that organizations should “take responsibility and not deploy their assets and permissions in a way that carries security risks and vulnerabilities.”
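To make the misconfiguration concrete, here is a hypothetical sketch of the kind of binding Orca is warning about. The `system:authenticated` group is real Kubernetes; the binding name is made up, and on GKE that group includes any authenticated Google account:

```yaml
# Hypothetical example of the overly permissive binding described above.
# On GKE, system:authenticated covers ANY valid Google account, so this
# effectively hands cluster-admin to anyone on the internet who can log in.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dangerous-binding   # hypothetical name
subjects:
- kind: Group
  name: system:authenticated
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
```

Nothing here exploits a bug; the "loophole" only exists once an admin writes a binding like this, which is why Google calls it intended behavior.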
Yeah, PEBKAC
That’s not true. If you’re intentionally logged in to a website, sure, but tracking without an account requires cooperation from your browser, assuming you’re using a VPN. Cookies, ad IDs, user agent, preferred language, etc. are all information the browser can decide whether or not to provide.
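As a minimal sketch of why those browser-provided values matter: hash the handful of headers a browser volunteers and you get a fairly stable identifier, no account or cookie needed. All header values below are made-up examples.

```python
import hashlib

def fingerprint(headers: dict) -> str:
    # Sort keys so the hash is stable regardless of header order.
    material = "|".join(f"{k}={headers[k]}" for k in sorted(headers))
    return hashlib.sha256(material.encode()).hexdigest()[:16]

browser_a = {
    "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) Firefox/120.0",
    "Accept-Language": "en-US,en;q=0.5",
    "DNT": "1",
}
# Same browser, but it decides to advertise a different language.
browser_b = dict(browser_a, **{"Accept-Language": "de-DE,de;q=0.7"})

# Changing a single volunteered value yields a different fingerprint,
# which is exactly the leverage the browser has over tracking.
print(fingerprint(browser_a) != fingerprint(browser_b))  # True
```

The point being: withholding or randomizing these values is a browser-side decision, which is what makes account-less tracking avoidable.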
What maintenance?
I don’t see why it wouldn’t be able to. That’s a Big Data problem, but we’ve gotten very very good at searches. Bing, for instance, conducts a web search on each prompt in order to give you a citation for what it says, which is pretty close to what I’m suggesting.
As far as comparing to see if the text is too similar, I’m not suggesting a simple comparison or even an expert system; I believe that’s something that can be trained. GANs already have a discriminator that’s essentially measuring how close the generated content is to “truth.” This is extremely similar to that.
I completely agree that categorizing input training data by whether or not it is copyrighted is not easy, but it is possible, and I think something that could be legislated. The AI you would have as a result would inherently not be as good as it is in the current unregulated form, but that’s not necessarily a worse situation given the controversies.
On top of that, one of the common defenses for AI is that it is learning from material just as humans do, but humans also can differentiate between copyrighted and public works. For the defense to be properly analogous, it would make sense to me that it would need some notion of that as well.
I know it inherently seems like a bad idea to fix an AI problem with more AI, but it seems applicable to me here. I believe it should be technically feasible to incorporate into the model something which checks if the result is too similar to source content as part of the regression.
My gut would be that this would, at least in the short term, make responses worse on the whole, so would probably require legal action or pressure to have it implemented.
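A toy sketch of the "too similar to source content" check described above, using word 3-gram overlap against a tiny made-up corpus. A real system would use a trained discriminator over embeddings, as the GAN comparison suggests, not literal n-grams; this only illustrates the shape of the idea.

```python
def ngrams(text: str, n: int = 3) -> set:
    # Break a passage into overlapping word n-grams.
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(generated: str, source: str) -> float:
    # Fraction of the generated text's 3-grams that appear in the source.
    g, s = ngrams(generated), ngrams(source)
    if not g:
        return 0.0
    return len(g & s) / len(g)

source = "the quick brown fox jumps over the lazy dog"
near_copy = "the quick brown fox jumps over a lazy dog"
original = "a slow red panda sleeps beside the river"

print(overlap_score(near_copy, source))  # high -> flag, regenerate
print(overlap_score(original, source))   # 0.0 -> acceptable
```

Folding a score like this into the training loss (penalize outputs above a threshold) is the "fix AI with more AI" step; the scoring model and threshold are the hard, trainable parts.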
Yup, that’s the one
I’ve found Babble to be okay
Time to add medium.com to the trash list
I already said, they can’t compete on price. Any price, however cheap, is still more than free. Same with interoperability: if you have the actual file, you can run it on anything. Group watching already exists.
More equal promotion of shows/movies and pay distribution don’t actually make the experience better for the consumer; that relies on the consumer behaving ethically and believing piracy is wrong. It only helps with the people who think it’s only sometimes wrong, which I don’t think is a huge group (although they are certainly the most vocal supporters of piracy).
That’s easy to say, but what can they actually do that provides a better service than piracy at this point? They can’t compete on price, number of shows, or quality of shows with piracy by a long shot. They can potentially provide a better ease of experience with quick downloads and casting, but they already have that and I don’t know that it can get any better.
As a general rule, I’d assume more piracy means less money into an industry, and less money in means fewer and less risky products that appeal to the lowest common denominator.
New Zealand strikes back after being excluded from so many maps
I’ll actually use Bing’s AI/LLM on occasion. I get frustrated in some of these conversations when people talk about AI’s limitation of generating false information that can’t be traced, when Bing’s does cite its sources if you want to fact-check.
You didn’t read their question, did you? Because your quote does not answer it.
Sounds to me like those new battery technologies entered production.
I’ve seen so many “this new battery technology” articles over the past decade, I can’t bring myself to care until it enters production.
Yeah, who’s gonna say “Oh, I’m not blocking ads on YouTube, better take the time to make sure I see ads everywhere else as well.”
I used it for Stadia for a bit. The only thing I thought was an actually useful feature was instant demos. Other than that, I feel like the overlap of people who want to play games, can afford it, and have a really good Internet connection, but don’t want to just buy a console/computer, is really small.
In my experience, it has not generated results in real time. I’ve either gotten the exact same response, or a prompt asking “would you like to generate an AI response to your search?”
So it seems, and it would make sense, that within a given time period they only generate a response once per search, and reuse that response for later identical searches, since that’s far more efficient.