Is this within a config file for ntfy? When I access it through localhost it tells me it needs to be accessed through HTTPS, which I set up through Cloudflare Tunnels, and the error went away.
There was no config file generated for my docker container for ntfy.
Edit: I have fixed this error but notifications still do not work.
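Edit 2: in case anyone else hits this, ntfy only uses a config file if you mount one into the container yourself; nothing gets generated for you. A minimal sketch of what I mean (the domain and mount path are placeholders, not my real setup):

```
# mounted into the container at /etc/ntfy/server.yml
# behind a Cloudflare Tunnel the upstream itself can stay plain HTTP
base-url: https://ntfy.example.com

# the same setting can instead be passed as an environment variable on the container:
#   NTFY_BASE_URL=https://ntfy.example.com
```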
I was able to get it working with this docker compose!! Thank you!
I can now sign into Element X and Schildi Next! The only problem is that none of my chats are showing up, and when I create a new room it disappears as soon as I back out to the chats tab.
Any idea how to fix it? Do I need a separate subdomain for the sliding sync proxy?
I am using Cloudflare Tunnels right now. I would really appreciate it if I could take a look at your settings!
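Edit: in case it helps anyone later, if you're on the matrix-docker-ansible-deploy playbook I don't believe the proxy needs its own subdomain; as far as I can tell it gets served from the matrix. hostname and is toggled in vars.yml with something like this (variable name taken from the playbook's sliding-sync docs, so double-check it against your version):

```
# inventory/host_vars/matrix.bishbash.com/vars.yml
matrix_sliding_sync_enabled: true
```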
THANK YOU ALL!
It was a problem with my Docker Compose file! I didn't list the devices required by the Jellyfin documentation. I thought the container was detecting the GPU but it wasn't. docker exec nvidia-smi is your friend!
Edit: so now it doesn't kick me out saying playback failed, but it's just a black screen with 4K media.
Edit 2: my bad, I forgot to enable some transcoding settings in Jellyfin lol
This is what that compose looks like now:
```
services:
  jellyfin:
    image: jellyfin/jellyfin
    user: 1000:1000
    network_mode: 'host'
    volumes:
      - /DATA/AppData/jellyfin/config:/config
      - /DATA/AppData/jellyfin/cache:/cache
      - /DATA/AppData/jellyfin/media:/media
      - /mnt/drive1/media:/mnt/drive1/media
      - /mnt/drive2/Jellyfin:/mnt/drive2/Jellyfin
      - /mnt/drive3:/mnt/drive3
      - /mnt/drive4/media:/mnt/drive4/media
      - /mnt/drive5/jellyfin:/mnt/drive5/jellyfin
      - /mnt/drive6/jellyfin:/mnt/drive6/jellyfin
    runtime: nvidia
    deploy:
      resources:
        reservations:
          devices:
            - driver: cdi
              device_ids:
                - nvidia.com/gpu=all
            - /dev/nvidia-caps:/dev/nvidia-caps
            - /dev/nvidia0:/dev/nvidia0
            - /dev/nvidiactl:/dev/nvidiactl
            - /dev/nvidia-modeset:/dev/nvidia-modeset
            - /dev/nvidia-uvm:/dev/nvidia-uvm
            - /dev/nvidia-uvm-tools:/dev/nvidia-uvm-tools
              count: all
              capabilities: [gpu]
```
Edit: when I try to run compose up it says “yaml: line 30: mapping values are not allowed in this context”. When I remove lines 30 and 31 the output is “validating /DATA/AppData/jellyfin/docker-compose.yml: services.jellyfin.deploy.resources.reservations.devices.1 must be a mapping”.
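Edit 2: as far as I can tell, both errors come from mixing list shapes under `devices:`. `count` and `capabilities` belong to a `driver: nvidia` reservation (and aren't needed with `driver: cdi`), while raw `/dev/...` mappings belong under a service-level `devices:` key, not inside `deploy.resources.reservations`. A sketch of one shape that should pass validation (the `/dev` paths are just the ones from my compose above; you shouldn't need them if the reservation block works):

```
    # GPU reservation via CDI; alternatively use `driver: nvidia` with
    # `count: all` and `capabilities: [gpu]`, but don't mix the two forms
    deploy:
      resources:
        reservations:
          devices:
            - driver: cdi
              device_ids:
                - nvidia.com/gpu=all
    # raw device mappings go at the service level, not under deploy/reservations
    devices:
      - /dev/nvidia0:/dev/nvidia0
      - /dev/nvidiactl:/dev/nvidiactl
      - /dev/nvidia-uvm:/dev/nvidia-uvm
      - /dev/nvidia-uvm-tools:/dev/nvidia-uvm-tools
      - /dev/nvidia-modeset:/dev/nvidia-modeset
      - /dev/nvidia-caps:/dev/nvidia-caps
```

Either block on its own should at least get past the “devices.1 must be a mapping” error.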
I tried this and it says:
OCI runtime exec failed: unable to start container process: exec: “nvidia-smi”: executable file not found in $PATH: unknown
I ran it as two commands instead of one before and still got that error message.
However, I tried again with a different Jellyfin image and the command seems to have run fine.
Here is a pic of my nvidia-smi output:
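For reference, the check itself is just this (the container name is whatever yours is called):

```
# confirm the GPU is visible from inside the Jellyfin container
docker exec -it jellyfin nvidia-smi
```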
I followed this guide and seemed to get it working.
https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
However, Jellyfin transcoding still doesn't work. I have tried adding the “NVIDIA_VISIBLE_DEVICES=all” environment variable, and it still didn't work.
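Edit: for reference, in compose form that variable looks like this; the second variable is the extra one the Jellyfin NVIDIA docs mention, so it's an addition on my part rather than something from my original file:

```
    # under services.jellyfin in docker-compose.yml
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=all
```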
I tried using the Docker Compose from here.
But when I try to run this command: “docker exec -it jellyfin ldconfig sudo systemctl restart docker”
It says the container is restarting and to try again when the container has started.
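For clarity, split out, what I was trying to run is two separate commands, with the first one only working once the container is actually up:

```
# run ldconfig inside the running Jellyfin container
docker exec -it jellyfin ldconfig
# then restart Docker on the host
sudo systemctl restart docker
```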
Please explain how I am spamming when my last post in this community was 3 days ago?
I find this resource very helpful, I am trying to learn.
My bad, it is now fixed.
I set it up multiple different times in different locations (the layout the docs expect is noted after the list):
/home/inventory/host_vars/matrix.bishbash.com
/home/inventory/hosts
/home/matrix-docker-ansible-deploy/inventory/host_vars/
/Home/matrix-docker-ansible-deploy/inventory/hosts
/desktop/ansible playbook/matrix-docker-ansible-deploy/inventory/host_vars
/desktop/ansible playbook/matrix-docker-ansible-deploy/inventory/hosts
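For reference, the layout the playbook docs describe, relative to wherever the matrix-docker-ansible-deploy repo is cloned (the host_vars directory is named after the server, and the variables go in a file called vars.yml inside it):

```
# expected locations inside the matrix-docker-ansible-deploy checkout
inventory/hosts
inventory/host_vars/matrix.bishbash.com/vars.yml
```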
```
# The bare domain name which represents your Matrix identity.
# Matrix user IDs for your server will be of the form (`@alice:example.com`).
#
# Note: this playbook does not touch the server referenced here.
# Installation happens on another server ("matrix.example.com", see `matrix_server_fqn_matrix`).
#
# If you've deployed using the wrong domain, you'll have to run the Uninstalling step,
# because you can't change the Domain after deployment.
matrix_domain: matrix.bishbash.com
# The Matrix homeserver software to install.
# See:
# - `roles/custom/matrix-base/defaults/main.yml` for valid options
# - the `docs/configuring-playbook-IMPLEMENTATION_NAME.md` documentation page, if one is available for your implementation choice
#
# By default, we use Synapse, because it's the only full-featured Matrix server at the moment.
#
# Note that the homeserver implementation of a server will not be able to be changed without data loss.
matrix_homeserver_implementation: synapse
# A secret used as a base, for generating various other secrets.
# You can put any string here, but generating a strong one is preferred (e.g. `pwgen -s 64 1`).
matrix_homeserver_generic_secret_key: 'I_put_my_actual_key_here'
# By default, the playbook manages its own Traefik (https://doc.traefik.io/traefik/) reverse-proxy server.
# It will retrieve SSL certificates for you on-demand and forward requests to all other components.
# For alternatives, see `docs/configuring-playbook-own-webserver.md`.
matrix_playbook_reverse_proxy_type: playbook-managed-traefik
# This is something which is provided to Let's Encrypt when retrieving SSL certificates for domains.
#
# In case SSL renewal fails at some point, you'll also get an email notification there.
#
# If you decide to use another method for managing SSL certificates (different than the default Let's Encrypt),
# you won't be required to define this variable (see `docs/configuring-playbook-ssl-certificates.md`).
#
# Example value: someone@example.com
traefik_config_certificatesResolvers_acme_email: ''
# A Postgres password to use for the superuser Postgres user (called `matrix` by default).
#
# The playbook creates additional Postgres users and databases (one for each enabled service)
# using this superuser account.
postgres_connection_password: 'I_made_a_password_here'
# By default, we configure Coturn's external IP address using the value specified for `ansible_host` in your `inventory/hosts` file.
# If this value is an external IP address, you can skip this section.
#
# If `ansible_host` is not the server's external IP address, you have 2 choices:
# 1. Uncomment the line below, to allow IP address auto-detection to happen (more on this below)
# 2. Uncomment and adjust the line below to specify an IP address manually
#
# By default, auto-detection will be attempted using the `https://ifconfig.co/json` API.
# Default values for this are specified in `matrix_coturn_turn_external_ip_address_auto_detection_*` variables in the Coturn role
# (see `roles/custom/matrix-coturn/defaults/main.yml`).
#
# If your server has multiple IP addresses, you may define them in another variable which allows a list of addresses.
# Example: `matrix_coturn_turn_external_ip_addresses: ['1.2.3.4', '4.5.6.7']`
#
#matrix_coturn_turn_external_ip_address: ''
```
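For completeness, the shape inventory/hosts is supposed to have per the playbook's example file (the IP and SSH user here are placeholders, not my real values):

```
[matrix_servers]
matrix.bishbash.com ansible_host=203.0.113.10 ansible_ssh_user=root
```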
No, I was deploying regular Conduit. Now I am trying conduwuit, and when I try to connect it says it doesn't support sliding sync. I cannot seem to find it referenced in the config file either.
EDIT: nvm, I just read that it is not implemented yet in conduwuit. 🥲 Kind of a dealbreaker because I am trying to get Element X working for group calling.
It's a joke; in that clip he says “two steps ahead, I am always two steps ahead.” It was viral for a while.
I would love to see what FUTO would come up with in this department.
My bad, sorry, I should have thought of making an official Matrix account and testing there. From what I can tell my ntfy container is working, because it works flawlessly with an official Matrix account.
That leaves me with two ideas so far: there is something wrong with my Matrix Dendrite container, or with my VPS coturn server (which I forgot to mention). It looks like traffic is coming through just fine on my coturn server though. I am curious if this is a firewall issue with the coturn server. That would make the most sense, given that Element Call is also not working in Element X.
It's weird though, because calling works across separate networks just fine, so I had assumed that my coturn server just worked. Odd.
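In case it is the firewall, these are the ports I understand coturn needs reachable; the relay range below is coturn's default, so check the min-port/max-port values in your actual turnserver config (ufw is just used here as an example):

```
# STUN/TURN
sudo ufw allow 3478/udp
sudo ufw allow 3478/tcp
# TURN over TLS
sudo ufw allow 5349/tcp
sudo ufw allow 5349/udp
# UDP relay port range (coturn default; adjust to your min-port/max-port)
sudo ufw allow 49152:65535/udp
```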
Edit: I think it is the TURN server, even though my calls are going through. I went to my Matrix URL with “/_matrix/client/r0/voip/turnServer” to diagnose WebRTC and it says “errcode: 'M_MISSING_TOKEN', error: 'Missing access token'”.
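Edit 2: from what I've read, M_MISSING_TOKEN only means the request had no access token (that endpoint requires a logged-in user), so on its own it doesn't prove the TURN config is wrong. To actually see the TURN credentials the homeserver hands out, something like this should work (the token and URL are placeholders):

```
# the access token can be copied from Element: Settings -> Help & About -> Advanced
curl -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \
  "https://your-matrix-server/_matrix/client/r0/voip/turnServer"
```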