

TLDW: the guy tests the RX 6800 at 1080p, 1440p and 4K across 19 games on Windows 11 vs Nobara 41.
Allegedly, Nobara beats Windows in all games except two (The Witcher 3 and CS2), across almost all resolutions, by around single-digit percentages.
It's not impossible that it is a coincidence, but the probability of that just fell way lower.
If the probability that a 5090 experiences this issue is 1%, then the probability that both devices fail (assuming the failures are independent) is 0.01%. Not impossible, but very unlikely.
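Just to spell out the arithmetic, here's a quick back-of-the-envelope sketch (the 1% figure is an assumption, and it only holds if the two failures are independent):

```python
# Back-of-the-envelope: chance that two independent cards both show the issue.
p_single = 0.01              # assumed 1% chance for any one card
p_both = p_single ** 2       # independence: multiply the probabilities
print(f"{p_both:.4%}")       # -> 0.0100%, i.e. roughly 1 in 10,000
```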
Regarding photos, and videos specifically:
I know you said you are starting with self-hosting, so your question was focused on that, but I would also like to share my experience with Ente, which has been working beautifully for my family, my partner and myself. It is truly end-to-end encrypted, with the source code available on GitHub.
They have reasonable prices, and if you feel adventurous you can also host it yourself. The advanced search features and face recognition all run on-device (since they can't access your data) and work very well. The sharing and collaboration features are great, and they don't lock them behind accounts, so you can actually gather memories from other people on your own quota just by sharing a link. You can also have a shared family plan.
Ollama is very useful but also rather barebones. I recommend installing Open WebUI to manage models and conversations. It will also be useful if you want to tweak more advanced settings like system prompts, seed, temperature and others.
You can install Open WebUI using Docker or just pip, which is enough if you only care about serving yourself.
Edit: Open WebUI also renders Markdown, which makes formatting and reading much more appealing and useful.
Edit 2: you can also plug Ollama into continue.dev, a VS Code extension that brings LLM capabilities into your IDE.
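If you ever want to poke at those settings (system prompt, seed, temperature) outside the UI, here's a rough sketch of what that looks like against Ollama's local REST API. This assumes Ollama is running on its default port and that the model name (llama3 here) is one you've already pulled:

```python
import json
import urllib.request

# Minimal sketch: query a local Ollama instance directly, overriding the
# system prompt, temperature and seed per request (the same knobs Open WebUI exposes).
payload = {
    "model": "llama3",                      # example; use whichever model you've pulled
    "prompt": "Explain ZFS snapshots in two sentences.",
    "system": "You are a concise sysadmin assistant.",
    "options": {"temperature": 0.2, "seed": 42},
    "stream": False,
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```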
I have numerous files which I am intentionally keeping around to improve seeding availability, but I've always been bothered by how little they seed. Yet somehow, while those same files are still downloading, seeding is great. Is this also a case where port forwarding is to blame? I do not have it enabled.
I'm pretty certain these kinds of commitments would almost never see the light of day, all the way through, at public companies. Most companies give it a shot, and if it's not generating profits after 3 years, they kill it and the CEO gets fired.
Seems the chapter for Jellyfin has been “coming soon” for 3 years, too bad.
I'm not saying it's not true, but nowhere on that page is there the word donation. And if it is one, the fact that it is described as a license, tied to a server or a user, causes a lot of confusion for me, especially combined with the fact that there is no paywall but registration is required.
Why use the terms license, server and user? Why not simply say donation, with the option of displaying your support by getting exclusive access to a badge, like Signal does?
Again, I'm very happy Immich is free, it is great software and it deserves support, but this is just super confusing to me, and the buy.immich.app link does not clarify things, nor does that blog post.
Edit: typo
Hi and thank you so much for the fantastic work on Immich! I’m hoping to get a chance to try it out soon, with the first stable release!
One question on the financial support page: is it not a donation? There are per-server and per-user purchases, but I thought Immich was exclusively self-hosted, is it not? Or is this more a way to say thanks while giving some hints as to how Immich is being used privately? Or is there a way to actually pay to have Immich host a server for you?
Thanks for clarifying!
What exact GPU are you considering? While GPUs like the 3060/4060 technically support ray tracing, they will not offer a comfortable enough experience to make use of it. You'll definitely have to consider a more powerful GPU, so if you are in the market for a sub-$500 GPU, you can simply ignore ray tracing. And that is even more true the higher you go in resolution.
Here you can see the drop in performance (on average about 30%) due to ray tracing on high-end cards, at high resolution with DLSS/FSR: https://www.youtube.com/watch?v=qTeKzJsoL3k
Just reading about Kirby brings the game's Game Boy theme back into my head.
This is the way.
I hear you, but how much time was Synology given? If it was no time at all (which it seems is what happened here??), that does not even give Synology a chance and that’s what I’m concerned with. If they get a month (give or take), then sure, disclose it and too bad for them if they don’t have a fix, they should have taken it more seriously, but I’m wondering about how much time they were even given in this case.
Was it that the talk was a last-minute change (replacing another scheduled talk), so the responsible disclosure was made in a rush without giving Synology more time to provide the patch before the talk was presented?
If so, who decided it was a good idea to present something regarding a vulnerability without the fix being available yet?
I'm not sure; I read that ZFS can help in the case of ransomware, so I assumed it would extend to accidental formatting, but maybe there's a key difference.
I think these kinds of situations are where ZFS snapshots shine: you're back in a matter of seconds with no data loss (assuming you have a recent snapshot from before the mistake).
Edit: yeah no, if you operate at the disk level directly, no local ZFS snapshot could save you…
I didn't say it can't. But I'm not sure how well it is optimized for it. From my initial testing, it queues queries and submits them one after another to the model; I have not seen it batch-compute the queries, but maybe it's a setup thing on my side. vLLM, on the other hand, is designed specifically for the multiple-concurrent-user use case and has several optimizations for it.
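For reference, a minimal sketch of what batched inference looks like with vLLM's offline API (the model name is just an example and needs to fit in your VRAM; vLLM also ships an OpenAI-compatible server for the actual multi-user case):

```python
from vllm import LLM, SamplingParams

# Sketch of vLLM's offline batched inference: all prompts are scheduled together
# (continuous batching) instead of being answered strictly one after another.
prompts = [
    "Summarize the plot of The Witcher 3 in one sentence.",
    "What is a ZFS snapshot?",
    "Write a haiku about GPUs.",
]
params = SamplingParams(temperature=0.2, max_tokens=128)

llm = LLM(model="mistralai/Mistral-Nemo-Instruct-2407")  # example model
outputs = llm.generate(prompts, params)

for out in outputs:
    print(out.outputs[0].text.strip())
```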
I run Mistral-Nemo (12B) and Mistral-Small (22B) on my GPU and they are pretty good. As others have said, GPU memory is one of the most limiting factors. 8B models are decent, 15-25B models are good and 70B+ models are excellent (solely based on my own experience). Go for q4_K quants, as they will run many times faster than higher-bit quantizations with little quality degradation. They typically come in S (Small), M (Medium) and L (Large); take the largest which fits in your GPU memory (rough size math sketched below). If you go below q4, you may see more severe and noticeable quality degradation.
If you need to serve only one user at a time, Ollama + Open WebUI works great. If you need multiple users at the same time, check out vLLM.
Edit: I'm simplifying it very much, but hopefully it is simple and actionable as a starting point. I've also seen great stuff from Gemma2-27B.
Edit2: added links
Edit 3: a decent bang-for-buck GPU, IMO, is the RTX 3060 with 12GB. It may be available on the used market for a decent price and offers a good amount of VRAM and GPU performance for the cost. I would like to recommend AMD GPUs, as they offer much more VRAM for the price, but they are not all as well supported by ROCm and I'm not sure about compatibility with these tools, so perhaps others can chime in.
Edit 4: you can also use Open WebUI with VS Code via the continue.dev extension, so that you can have a Copilot-type LLM in your editor.
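To make the "largest quant that fits" advice concrete, here's a rough back-of-the-envelope sketch; the bytes-per-parameter figures are my own approximations for common GGUF quant types and ignore context/KV-cache overhead:

```python
# Rough VRAM estimate: model weights only, ignoring KV cache and runtime overhead.
# Bytes per parameter are approximate averages for common GGUF quant types.
BYTES_PER_PARAM = {"q8_0": 1.06, "q6_K": 0.82, "q5_K_M": 0.71, "q4_K_M": 0.60}

def approx_vram_gb(params_billions: float, quant: str) -> float:
    return params_billions * 1e9 * BYTES_PER_PARAM[quant] / 1024**3

for model, size in [("Mistral-Nemo", 12), ("Mistral-Small", 22), ("Llama-3 70B", 70)]:
    print(model, f"q4_K_M ~ {approx_vram_gb(size, 'q4_K_M'):.1f} GiB")

# A 12B q4_K_M lands around ~6.7 GiB, a 22B around ~12 GiB and a 70B around ~39 GiB,
# which is why 12-24GB of VRAM is the practical sweet spot for the 12-22B range.
```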
Sure, anytime. Create a new post and tag me if you need me specifically to have a look. I've used Docker on Synology for years and have gone through major updates, and while I'm certainly no expert, I've learned some things which could be helpful.
Would you be able to share more info? I remember reading about their issues with Docker, but I don't recall reading whether or what they switched to. What is it now?