• 0 Posts
  • 26 Comments
Joined 10 months ago
Cake day: September 19th, 2023

  • You’re absolutely right that I’m not an expert in either psychology or sociology. But I’m not basing my opinions on popular culture either. My opinions were mostly formed years ago, when I had to do a month of research and write my exam in Examen Philosophicum during my studies for a Master of Science, where I wrote about the history of scientific theory.

    It was mostly when I read about Karl Popper and his criterion of falsifiability that I stumbled upon the “science” of psychology. Other than that, my impression of psychology mostly comes from living with a psychology student for three years and hearing about her studies.

    A science without hard facts isn’t much of a science from my point of view. You can’t reproduce or simulate the minds of people thousands of times. There are so many variables factoring in when you’re researching or diagnosing. How do you separate the researcher’s emotions and personal interpretation from the “objective” facts of a person’s psyche?

    @[email protected] Implying I’m a troll isn’t really a great form of argument. Feel free to type your whole comment again and I’ll read it with an open mind.

    @[email protected] I’m not trying to be anti-science. I still think it’s a worthwhile field of study; I just don’t think it fits the criteria of science. Feel free to prove me wrong.







  • Frankly, for anything other than real-time encoding, I don’t actually consider encoding time to be a huge deal. None of my encodes were slower than 3 fps on my 5800X3D, which is plenty for running on my media server as an overnight job. For real-time encoding, I would just grab an Intel Arc card and redo the whole thing, since the bitrates will be different anyway.

    Encoding speed heavily depends on your preset. Veryslow will give you better compression than medium or fast, but at a heavy cost in encoding speed; you’re not going to re-encode a movie overnight on a slow preset. GPU encoding will also give you worse results than a CPU encode, so that’s something to take into consideration. It’s not a big deal when you’re streaming, but for video files I’d much prefer using the CPU.
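
    As a sketch, the preset/CRF trade-off maps onto ffmpeg flags roughly like this (libx265 used as an example CPU encoder; filenames and the helper function are hypothetical):

```python
def x265_cmd(src, dst, preset, crf=24):
    """Build an ffmpeg command line for a CPU (libx265) encode.

    Same CRF at a slower preset -> better compression, much longer
    encode time. Filenames here are placeholders.
    """
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx265",
        "-preset", preset,  # e.g. "medium" vs "veryslow"
        "-crf", str(crf),   # same target quality level for both
        dst,
    ]

# An overnight batch might run the slow variant:
#   subprocess.run(x265_cmd("movie.mkv", "movie_vs.mkv", "veryslow"), check=True)
print(" ".join(x265_cmd("movie.mkv", "movie_vs.mkv", "veryslow")))
```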

    I consider the ‘good enough’ level to be: if I didn’t pixel peep, I couldn’t tell the difference. The visually lossless levels were the first CRF levels where I couldn’t tell a quality difference even when pixel peeping with imgsli. I also included VMAF results, which show that the quality loss levels are all the same at a pixel level.
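
    For reference, a VMAF score can be computed with ffmpeg’s libvmaf filter (this assumes an ffmpeg build with libvmaf enabled; the filenames and helper are hypothetical):

```python
def vmaf_cmd(distorted, reference):
    """Build an ffmpeg command that scores `distorted` against `reference`."""
    return [
        "ffmpeg",
        "-i", distorted,    # the encode being evaluated
        "-i", reference,    # the original source
        "-lavfi", "libvmaf",
        "-f", "null", "-",  # discard output, just log the score
    ]

cmd = vmaf_cmd("encode_crf24.mkv", "source.mkv")
# subprocess.run(cmd) prints a "VMAF score: ..." line at the end of the log
print(" ".join(cmd))
```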

    I was mostly talking about how you organised your table by using CRF values as the rows. It implies that one should compare the results in each row, but that comparison doesn’t make much sense, since CRF values don’t mean the same thing across codecs. E.g. looking at row “24” one might think that AV1 is less effective than H.264/5 due to the greater file size, but the video quality is vastly different. A more “informative” way to present the data might have been to organise each row by its VMAF score.
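
    For illustration, here’s how such a table could be regrouped by VMAF instead of CRF — the numbers below are made up for the sketch, not taken from the actual benchmark:

```python
from itertools import groupby

# Hypothetical encode results: (codec, CRF, VMAF, size in MB).
# Illustrative values only, not the benchmark's real data.
results = [
    ("h264", 20, 95.1, 1400),
    ("h265", 24, 95.3, 900),
    ("av1",  30, 95.0, 650),
    ("h264", 24, 92.0, 1000),
    ("h265", 28, 92.2, 700),
    ("av1",  35, 91.8, 480),
]

def bucket(row):
    return round(row[2])  # group to the nearest whole VMAF point

def by_vmaf(rows):
    """Group rows by similar VMAF, so each row of the table compares
    encodes of roughly equal visual quality rather than equal CRF."""
    ordered = sorted(rows, key=bucket, reverse=True)
    return {score: list(group) for score, group in groupby(ordered, key=bucket)}

for score, group in by_vmaf(results).items():
    row = ", ".join(f"{c} crf{crf} {mb} MB" for c, crf, _, mb in group)
    print(f"VMAF ~{score}: {row}")
```

    With rows keyed on quality, the file-size column directly shows each codec’s efficiency at the same visual level.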

    Hopefully I don’t come across as too cross or argumentative; I just want to give some feedback on how to present the data in a clearer way for people who aren’t familiar with how encoding works.