Compare commits

...
This repository has been archived on 2025-08-14. You can view files and clone it, but you cannot make any changes to its state, such as pushing or creating new issues, pull requests, or comments.

41 commits

Author SHA1 Message Date
strawberry
bc49d038f1 improvement: registration token now only works when registration is enabled
Co-authored-by: Timo Kösters <timo@koesters.xyz>
Signed-off-by: strawberry <strawberry@puppygock.gay>
2024-01-24 22:42:59 -05:00
strawberry
70d0c57ac7 update DIFFERENCES.md
Signed-off-by: strawberry <strawberry@puppygock.gay>
2024-01-24 22:04:33 -05:00
strawberry
8f4882ac00 add cargo clippy
Signed-off-by: strawberry <strawberry@puppygock.gay>
2024-01-24 22:04:26 -05:00
strawberry
f4b50d2c1d update nix flake for gitlab CI building docker images
Signed-off-by: strawberry <strawberry@puppygock.gay>
2024-01-24 21:52:42 -05:00
strawberry
54887fd0cf fix room ID messages, remove comments
Signed-off-by: strawberry <strawberry@puppygock.gay>
2024-01-24 21:09:07 -05:00
strawberry
3a80e32ba1 assume well-known is None if text length exceeds 10000 chars
Signed-off-by: strawberry <strawberry@puppygock.gay>
2024-01-24 21:07:53 -05:00
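A minimal sketch of the guard described above, assuming a serde_json-based lookup of the `m.server` delegation key; the helper name, the parsing, and the 10,000-character constant (taken from the commit message) are illustrative rather than conduwuit's actual code:

```rust
use serde_json::Value;

// Limit quoted in the commit message; the real constant may differ.
const MAX_WELL_KNOWN_LEN: usize = 10_000;

// Treat an oversized .well-known body as absent instead of trying to parse it.
fn well_known_server(body: &str) -> Option<String> {
    if body.len() > MAX_WELL_KNOWN_LEN {
        return None;
    }
    let json: Value = serde_json::from_str(body).ok()?;
    json.get("m.server")?.as_str().map(str::to_owned)
}
```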
Charles Hall
5de3fb4502 move resolver logic into the resolver
Honestly not sure why it wasn't done like this before. This code is much
less awkward to follow and more compartmentalized.

These changes were mainly motivated by a clippy lint triggering on the
original code, which then made me wonder if I could get rid of some of
the `Box`ing. Turns out I could, and this is the result of that.

Co-authored-by: strawberry <strawberry@puppygock.gay>
Signed-off-by: strawberry <strawberry@puppygock.gay>
2024-01-24 18:41:03 -05:00
strawberry
916e160148 use both is_ip_literal and IPAddress is_valid checks
Signed-off-by: strawberry <strawberry@puppygock.gay>
2024-01-24 18:34:41 -05:00
strawberry
a1869444f5 just remove double quotes if found instead
Signed-off-by: strawberry <strawberry@puppygock.gay>
2024-01-24 18:16:25 -05:00
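As a tiny illustration of the approach (the helper name is hypothetical), the value keeps its content and only loses any surrounding double quotes:

```rust
// Illustrative only: strip surrounding double quotes from a user-supplied
// value instead of rejecting or re-quoting it.
fn strip_quotes(value: &str) -> &str {
    value.trim_matches('"')
}

// strip_quotes("\"!abcdef:example.com\"") == "!abcdef:example.com"
```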
strawberry
99e72f8f63 custom room ID checks, dont use format! macro due to quotes being added
Signed-off-by: strawberry <strawberry@puppygock.gay>
2024-01-24 16:48:10 -05:00
strawberry
fa56f48ff2 check if room ID already exists instead of erroring on auth check
Signed-off-by: strawberry <strawberry@puppygock.gay>
2024-01-24 16:45:09 -05:00
strawberry
3d3389e946 additional character check on room alias
Signed-off-by: strawberry <strawberry@puppygock.gay>
2024-01-24 16:44:37 -05:00
strawberry
9fc0a3ccc4 update DIFFERENCES.md
Signed-off-by: strawberry <strawberry@puppygock.gay>
2024-01-24 15:51:33 -05:00
strawberry
d1d37d148a IP range denylist logging, and fix logic error
Signed-off-by: strawberry <strawberry@puppygock.gay>
2024-01-24 15:51:13 -05:00
strawberry
b8d3fb89fb add custom room ID support using room_id field
Signed-off-by: strawberry <strawberry@puppygock.gay>
2024-01-24 12:21:18 -05:00
strawberry
d82b9f3ec4 move room creation config check higher up
dont bother wasting resources if we know we
arent even allowed to make the room to begin with

Signed-off-by: strawberry <strawberry@puppygock.gay>
2024-01-24 12:18:49 -05:00
strawberry
c33a35e821 dont crash failing to deserialise room creation content
Signed-off-by: strawberry <strawberry@puppygock.gay>
2024-01-24 12:17:54 -05:00
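A hedged sketch of the "don't crash" idea: deserialize the client-supplied `creation_content` with serde and return an error value instead of unwrapping; the function name and error type are illustrative:

```rust
use serde_json::{Map, Value};

// Illustrative: surface an error instead of panicking when the client-supplied
// creation_content is not a valid JSON object.
fn parse_creation_content(raw: &str) -> Result<Map<String, Value>, String> {
    serde_json::from_str(raw).map_err(|e| format!("invalid creation_content: {e}"))
}
```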
strawberry
cef8fe6f6e add error checking to room aliases
length, colon, and spaces. also dont crash.

Signed-off-by: strawberry <strawberry@puppygock.gay>
2024-01-24 12:17:12 -05:00
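The checks named in the commit (length, colon, spaces), sketched as a hypothetical validator for the alias localpart; the 255-byte cap is the general Matrix identifier limit and may not be the exact bound used here:

```rust
// Illustrative validation of a user-supplied room alias localpart.
fn valid_alias_localpart(localpart: &str) -> bool {
    !localpart.is_empty()
        && localpart.len() <= 255 // Matrix identifiers are capped at 255 bytes overall
        && !localpart.contains(':') // ':' would break the localpart/server-name split
        && !localpart.contains(char::is_whitespace)
}
```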
strawberry
ef225e86c4 remove random space
Signed-off-by: strawberry <strawberry@puppygock.gay>
2024-01-24 12:15:49 -05:00
strawberry
f61b5c7fa9 send home_server on login response again
a 6+ year old deprecated field that isn't even spelled
right and that no clients use must still be sent
according to the spec

Signed-off-by: strawberry <strawberry@puppygock.gay>
2024-01-24 12:15:23 -05:00
strawberry
0b65a362ae update deps
Signed-off-by: strawberry <strawberry@puppygock.gay>
2024-01-24 12:12:32 -05:00
strawberry
69e3de5aaa delete Dockerfile
Signed-off-by: strawberry <strawberry@puppygock.gay>
2024-01-24 12:12:21 -05:00
strawberry
3433e6ef66 don't send requests to specified list of IP CIDRs
this can most definitely be improved but this is a decent attempt.
the only annoying thing is i couldn't just use a Vec<IPAddress> which
would have significantly simplified all of this, but serde can't
deserialise it on the config side i guess.

i may find a better way to do this in the future, but this should cover
most areas anyways.

Signed-off-by: strawberry <strawberry@puppygock.gay>
2024-01-22 00:00:35 -05:00
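A sketch of the shape described above, assuming the config keeps the CIDRs as plain strings (since `Vec<IPAddress>` would not deserialize) and that the `ipaddress` crate's `IPAddress::parse`/`includes` API does the matching; the names are illustrative:

```rust
use ipaddress::IPAddress;

// Illustrative: keep the configured CIDRs as strings, parse them on use, and
// refuse to send a request when the destination IP falls inside any of them.
fn is_denied(ip_range_denylist: &[String], destination_ip: &str) -> bool {
    let Ok(dest) = IPAddress::parse(destination_ip) else {
        return false; // not an IP literal; nothing to match against
    };
    ip_range_denylist
        .iter()
        .filter_map(|cidr| IPAddress::parse(cidr.as_str()).ok())
        .any(|range| range.includes(&dest))
}
```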
strawberry
6199e438af oops forgot that endpoint too
Signed-off-by: strawberry <strawberry@puppygock.gay>
2024-01-21 20:34:31 -05:00
strawberry
bbf7387403 eat less of client parameters for media requests
still can't respect allow_redirect yet

Signed-off-by: strawberry <strawberry@puppygock.gay>
2024-01-21 20:30:39 -05:00
strawberry
06ca17adf3 switch back to expect for sender_user
as far as i can tell, it will return a normal
error in the auth token handling code so this is fine.
we also shouldnt assume all errors from this are
access_token related.

Signed-off-by: strawberry <strawberry@puppygock.gay>
2024-01-21 20:30:09 -05:00
strawberry
cfe338081c match explicit URI to see if we should authenticate the user
first attempt at forcing an endpoint to be authenticated

Signed-off-by: strawberry <strawberry@puppygock.gay>
2024-01-21 18:18:21 -05:00
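A hypothetical illustration of the idea: match the raw request path and force token authentication for specific URIs before falling back to the endpoint's declared auth requirements. The paths below are placeholders, not the endpoint this commit targets:

```rust
// Illustrative only: force token authentication for explicitly matched URIs,
// even if the endpoint's metadata marks authentication as optional.
fn force_authentication(path: &str) -> bool {
    // Placeholder patterns; the commit message does not name the real endpoint.
    matches!(
        path,
        "/_matrix/client/v3/example/endpoint" | "/_matrix/client/v1/example/endpoint"
    )
}
```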
strawberry
6ab2a1c56f update DIFFERENCES.md so far
Signed-off-by: strawberry <strawberry@puppygock.gay>
2024-01-21 18:17:29 -05:00
strawberry
dbd0959706 bump ruma, rusqlite, and rocksdb
latest rocksdb now has WriteBufferManager support

i hope no one is using sqlite with conduwuit, but if they are let's
bump it to latest git too for the latest sqlite version available.

Signed-off-by: strawberry <strawberry@puppygock.gay>
2024-01-21 16:05:38 -05:00
strawberry
f48b5ae49f use ruma JsOption, bump figment
Signed-off-by: strawberry <strawberry@puppygock.gay>
2024-01-21 15:43:42 -05:00
strawberry
88d6d7e856 add warning about outgoing presence PDU/EDU relationship
Signed-off-by: strawberry <strawberry@puppygock.gay>
2024-01-21 13:45:14 -05:00
strawberry
973c4fe31d use engage for gitlab CI
from https://gitlab.com/famedly/conduit/-/merge_requests/564

Signed-off-by: strawberry <strawberry@puppygock.gay>
2024-01-20 22:49:21 -05:00
strawberry
ce24b509cf return joined member count of room for pushrules instead of hardcoded 10
im not sure what the TODO is trying to say here,
but since it's many years old and conduwuit is
fast, i dont see an issue with this.

Signed-off-by: strawberry <strawberry@puppygock.gay>
2024-01-20 19:51:39 -05:00
strawberry
13af7e30ad silence loud tower_http errors (move to info)
these are benign errors that are from things like
conduwuit fetching remote media from dead servers

Signed-off-by: strawberry <strawberry@puppygock.gay>
2024-01-20 19:03:40 -05:00
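One plausible way to do what the commit describes, assuming the noise comes from tower_http's `TraceLayer` failure logging and its `DefaultOnFailure` level is simply lowered to INFO; this is a sketch, not necessarily the exact change:

```rust
use tower_http::classify::{ServerErrorsAsFailures, SharedClassifier};
use tower_http::trace::{DefaultOnFailure, TraceLayer};
use tracing::Level;

// Illustrative: log request failures at INFO instead of ERROR so fetches from
// dead remote servers no longer show up as loud errors.
fn quieter_trace_layer() -> TraceLayer<SharedClassifier<ServerErrorsAsFailures>> {
    TraceLayer::new_for_http().on_failure(DefaultOnFailure::new().level(Level::INFO))
}
```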
strawberry
968e6e6100 support sending well_known client response in /login using well_known_client
Signed-off-by: strawberry <strawberry@puppygock.gay>
2024-01-20 19:02:44 -05:00
strawberry
fd92ffa9b4 send avatar_url on invite member events like synapse
Signed-off-by: strawberry <strawberry@puppygock.gay>
2024-01-20 12:44:17 -05:00
strawberry
7034693c42 fix obvious copy-paste error
Signed-off-by: strawberry <strawberry@puppygock.gay>
2024-01-20 12:37:29 -05:00
strawberry
8622ceb1fc add conduwuit-example.toml (new example config)
Signed-off-by: strawberry <strawberry@puppygock.gay>
2024-01-18 01:08:25 -05:00
strawberry
ba49a22f22 revamp example config, document lots of config options
Signed-off-by: strawberry <strawberry@puppygock.gay>
2024-01-18 01:07:29 -05:00
strawberry
d63ded74a0 make warning and slight changes to DEPLOY.md for conduwuit
this is not finished yet

Signed-off-by: strawberry <strawberry@puppygock.gay>
2024-01-18 01:07:15 -05:00
strawberry
d93efb2d3a remove rocksdb_max_open_files option
default for RocksDB is -1 and conduwuit already raises the
soft and hard nofile limits at startup.

Signed-off-by: strawberry <strawberry@puppygock.gay>
2024-01-18 01:06:45 -05:00
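For context on "raises the soft and hard nofile limits at startup", a hedged sketch using the `nix` crate's resource API (a dependency with the `resource` feature per Cargo.toml); the limit values and error handling are illustrative:

```rust
use nix::sys::resource::{getrlimit, setrlimit, Resource};

// Illustrative: raise the soft open-file limit to the hard maximum at startup
// so RocksDB (whose own default is effectively unlimited) is not constrained.
fn raise_nofile_limit() -> nix::Result<()> {
    let (_soft, hard) = getrlimit(Resource::RLIMIT_NOFILE)?;
    setrlimit(Resource::RLIMIT_NOFILE, hard, hard)
}
```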
29 changed files with 919 additions and 732 deletions

5
.gitignore vendored
View file

@@ -70,4 +70,7 @@ cached_target
/.direnv
test-conduit/
test-conduit.toml
test-conduit.toml
# Gitlab CI cache
/.gitlab-ci.d

View file

@@ -1,242 +1,67 @@
stages:
- build
- build docker image
- test
- upload artifacts
- ci
- artifacts
variables:
# Make GitLab CI go fast:
GIT_SUBMODULE_STRATEGY: recursive
FF_USE_FASTZIP: 1
CACHE_COMPRESSION_LEVEL: fastest
# Makes some things print in color
TERM: ansi
# --------------------------------------------------------------------- #
# Create and publish docker image #
# --------------------------------------------------------------------- #
before_script:
# Enable nix-command and flakes
- if command -v nix > /dev/null; then echo "experimental-features = nix-command flakes" >> /etc/nix/nix.conf; fi
.docker-shared-settings:
stage: "build docker image"
needs: []
tags: [ "docker" ]
variables:
# Docker in Docker:
DOCKER_BUILDKIT: 1
image:
name: docker.io/docker
services:
- name: docker.io/docker:dind
alias: docker
# Add nix-community binary cache
- if command -v nix > /dev/null; then echo "extra-substituters = https://nix-community.cachix.org" >> /etc/nix/nix.conf; fi
- if command -v nix > /dev/null; then echo "extra-trusted-public-keys = nix-community.cachix.org-1:mB9FSh9qf2dCimDSUo8Zy7bkq5CX+/rkCWyvRCYg3Fs=" >> /etc/nix/nix.conf; fi
# Install direnv and nix-direnv
- if command -v nix > /dev/null; then nix-env -iA nixpkgs.direnv nixpkgs.nix-direnv; fi
# Allow .envrc
- if command -v nix > /dev/null; then direnv allow; fi
# Set CARGO_HOME to a cacheable path
- export CARGO_HOME="$(git rev-parse --show-toplevel)/.gitlab-ci.d/cargo"
ci:
stage: ci
image: nixos/nix:2.19.2
script:
- apk add openssh-client
- eval $(ssh-agent -s)
- mkdir -p ~/.ssh && chmod 700 ~/.ssh
- printf "Host *\n\tStrictHostKeyChecking no\n\n" >> ~/.ssh/config
- sh .gitlab/setup-buildx-remote-builders.sh
# Authorize against this project's own image registry:
- docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
# Build multiplatform image and push to temporary tag:
- >
docker buildx build
--platform "linux/arm/v7,linux/arm64,linux/amd64"
--pull
--tag "$CI_REGISTRY_IMAGE/temporary-ci-images:$CI_JOB_ID"
--push
--provenance=false
--file "Dockerfile" .
# Build multiplatform image to deb stage and extract their .deb files:
- >
docker buildx build
--platform "linux/arm/v7,linux/arm64,linux/amd64"
--target "packager-result"
--output="type=local,dest=/tmp/build-output"
--provenance=false
--file "Dockerfile" .
# Build multiplatform image to binary stage and extract their binaries:
- >
docker buildx build
--platform "linux/arm/v7,linux/arm64,linux/amd64"
--target "builder-result"
--output="type=local,dest=/tmp/build-output"
--provenance=false
--file "Dockerfile" .
# Copy to GitLab container registry:
- >
docker buildx imagetools create
--tag "$CI_REGISTRY_IMAGE/$TAG"
--tag "$CI_REGISTRY_IMAGE/$TAG-bullseye"
--tag "$CI_REGISTRY_IMAGE/$TAG-commit-$CI_COMMIT_SHORT_SHA"
"$CI_REGISTRY_IMAGE/temporary-ci-images:$CI_JOB_ID"
# if DockerHub credentials exist, also copy to dockerhub:
- if [ -n "${DOCKER_HUB}" ]; then docker login -u "$DOCKER_HUB_USER" -p "$DOCKER_HUB_PASSWORD" "$DOCKER_HUB"; fi
- >
if [ -n "${DOCKER_HUB}" ]; then
docker buildx imagetools create
--tag "$DOCKER_HUB_IMAGE/$TAG"
--tag "$DOCKER_HUB_IMAGE/$TAG-bullseye"
--tag "$DOCKER_HUB_IMAGE/$TAG-commit-$CI_COMMIT_SHORT_SHA"
"$CI_REGISTRY_IMAGE/temporary-ci-images:$CI_JOB_ID"
; fi
- mv /tmp/build-output ./
- direnv exec . engage
cache:
key: nix
paths:
- target
- .gitlab-ci.d
docker:
stage: artifacts
image: nixos/nix:2.19.2
script:
- nix build .#oci-image
# Make the output less difficult to find
- cp result docker-image.tar.gz
artifacts:
paths:
- "./build-output/"
- docker-image.tar.gz
docker:next:
extends: .docker-shared-settings
rules:
- if: '$BUILD_SERVER_SSH_PRIVATE_KEY && $CI_COMMIT_BRANCH == "next"'
variables:
TAG: "matrix-conduit:next"
docker:master:
extends: .docker-shared-settings
rules:
- if: '$BUILD_SERVER_SSH_PRIVATE_KEY && $CI_COMMIT_BRANCH == "master"'
variables:
TAG: "matrix-conduit:latest"
docker:tags:
extends: .docker-shared-settings
rules:
- if: "$BUILD_SERVER_SSH_PRIVATE_KEY && $CI_COMMIT_TAG"
variables:
TAG: "matrix-conduit:$CI_COMMIT_TAG"
docker build debugging:
extends: .docker-shared-settings
rules:
- if: "$CI_MERGE_REQUEST_TITLE =~ /.*[Dd]ocker.*/"
variables:
TAG: "matrix-conduit-docker-tests:latest"
# --------------------------------------------------------------------- #
# Run tests #
# --------------------------------------------------------------------- #
cargo check:
stage: test
image: docker.io/rust:1.75.0-bullseye
needs: []
interruptible: true
before_script:
- "rustup show && rustc --version && cargo --version" # Print version info for debugging
- apt-get update && apt-get -y --no-install-recommends install libclang-dev # dependency for rocksdb
debian:
stage: artifacts
image: rust:1.75.0
script:
- cargo check
- apt-get update && apt-get install -y --no-install-recommends libclang-dev
- cargo install cargo-deb
- cargo deb
.test-shared-settings:
stage: "test"
needs: []
image: "registry.gitlab.com/jfowl/conduit-containers/rust-with-tools:latest"
tags: ["docker"]
variables:
CARGO_INCREMENTAL: "false" # https://matklad.github.io/2021/09/04/fast-rust-builds.html#ci-workflow
interruptible: true
test:cargo:
extends: .test-shared-settings
before_script:
- apt-get update && apt-get -y --no-install-recommends install libclang-dev # dependency for rocksdb
script:
- rustc --version && cargo --version # Print version info for debugging
- "cargo test --color always --workspace --verbose --locked --no-fail-fast"
test:clippy:
extends: .test-shared-settings
before_script:
- rustup component add clippy
- apt-get update && apt-get -y --no-install-recommends install libclang-dev # dependency for rocksdb
script:
- rustc --version && cargo --version # Print version info for debugging
- "cargo clippy --color always --verbose --message-format=json | gitlab-report -p clippy > $CI_PROJECT_DIR/gl-code-quality-report.json"
# Make the output less difficult to find
- mv target/debian/*.deb .
artifacts:
when: always
reports:
codequality: gl-code-quality-report.json
test:format:
extends: .test-shared-settings
before_script:
- rustup component add rustfmt
script:
- cargo fmt --all -- --check
test:audit:
extends: .test-shared-settings
script:
- cargo audit --color always || true
- cargo audit --stale --json | gitlab-report -p audit > gl-sast-report.json
artifacts:
when: always
reports:
sast: gl-sast-report.json
test:dockerlint:
stage: "test"
needs: []
image: "ghcr.io/hadolint/hadolint@sha256:6c4b7c23f96339489dd35f21a711996d7ce63047467a9a562287748a03ad5242" # 2.8.0-alpine
interruptible: true
script:
- hadolint --version
# First pass: Print for CI log:
- >
hadolint
--no-fail --verbose
./Dockerfile
# Then output the results into a json for GitLab to pretty-print this in the MR:
- >
hadolint
--format gitlab_codeclimate
--failure-threshold error
./Dockerfile > dockerlint.json
artifacts:
when: always
reports:
codequality: dockerlint.json
paths:
- dockerlint.json
rules:
- if: '$CI_COMMIT_REF_NAME != "master"'
changes:
- docker/*Dockerfile
- Dockerfile
- .gitlab-ci.yml
- if: '$CI_COMMIT_REF_NAME == "master"'
- if: '$CI_COMMIT_REF_NAME == "next"'
- "*.deb"
cache:
key: debian
paths:
- target
- .gitlab-ci.d
# --------------------------------------------------------------------- #
# Store binaries as package so they have download urls #
# --------------------------------------------------------------------- #
# DISABLED FOR NOW, NEEDS TO BE FIXED AT A LATER TIME:
#publish:package:
# stage: "upload artifacts"
# needs:
# - "docker:tags"
# rules:
# - if: "$CI_COMMIT_TAG"
# image: curlimages/curl:latest
# tags: ["docker"]
# variables:
# GIT_STRATEGY: "none" # Don't need a clean copy of the code, we just operate on artifacts
# script:
# - 'BASE_URL="${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/generic/conduit-${CI_COMMIT_REF_SLUG}/build-${CI_PIPELINE_ID}"'
# - 'curl --header "JOB-TOKEN: $CI_JOB_TOKEN" --upload-file build-output/linux_amd64/conduit "${BASE_URL}/conduit-x86_64-unknown-linux-gnu"'
# - 'curl --header "JOB-TOKEN: $CI_JOB_TOKEN" --upload-file build-output/linux_arm_v7/conduit "${BASE_URL}/conduit-armv7-unknown-linux-gnu"'
# - 'curl --header "JOB-TOKEN: $CI_JOB_TOKEN" --upload-file build-output/linux_arm64/conduit "${BASE_URL}/conduit-aarch64-unknown-linux-gnu"'
# - 'curl --header "JOB-TOKEN: $CI_JOB_TOKEN" --upload-file build-output/linux_amd64/conduit.deb "${BASE_URL}/conduit-x86_64-unknown-linux-gnu.deb"'
# - 'curl --header "JOB-TOKEN: $CI_JOB_TOKEN" --upload-file build-output/linux_arm_v7/conduit.deb "${BASE_URL}/conduit-armv7-unknown-linux-gnu.deb"'
# - 'curl --header "JOB-TOKEN: $CI_JOB_TOKEN" --upload-file build-output/linux_arm64/conduit.deb "${BASE_URL}/conduit-aarch64-unknown-linux-gnu.deb"'
# Avoid duplicate pipelines
# See: https://docs.gitlab.com/ee/ci/yaml/workflow.html#switch-between-branch-pipelines-and-merge-request-pipelines
workflow:
rules:
- if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
- if: "$CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS"
when: never
- if: "$CI_COMMIT_BRANCH"
- if: "$CI_COMMIT_TAG"

View file

@@ -3,6 +3,6 @@
-----------------------------------------------------------------------------
- [ ] I ran `cargo fmt` and `cargo test`
- [ ] I ran `cargo fmt`, `cargo clippy`, and `cargo test`
- [ ] I agree to release my code and all other changes of this MR under the Apache-2.0 license

118
Cargo.lock generated
View file

@@ -58,9 +58,9 @@ checksum = "bddcadddf5e9015d310179a59bb28c4d4b9920ad0f11e8e14dbadf654890c9a6"
[[package]]
name = "argon2"
version = "0.5.2"
version = "0.5.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "17ba4cac0a46bc1d2912652a751c47f2a9f3a7fe89bcae2275d418f5270402f9"
checksum = "3c3610892ee6e0cbce8ae2700349fcf8f98adb0dbfbee85aec3c9179d29cc072"
dependencies = [
"base64ct",
"blake2",
@@ -400,7 +400,7 @@ dependencies = [
"hyper",
"hyperlocal",
"image",
"js_option",
"ipaddress",
"jsonwebtoken",
"lazy_static",
"lru-cache",
@@ -682,9 +682,9 @@ checksum = "27573eac26f4dd11e2b1916c3fe1baa56407c83c71a773a8ba17ec0bca03b6b7"
[[package]]
name = "figment"
version = "0.10.13"
version = "0.10.14"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7629b8c7bcd214a072c2c88b263b5bb3ceb54c34365d8c41c1665461aeae0993"
checksum = "2b6e5bc7bd59d60d0d45a6ccab6cf0f4ce28698fb4e81e750ddf229c9b824026"
dependencies = [
"atomic",
"pear",
@@ -865,9 +865,9 @@ dependencies = [
[[package]]
name = "hashlink"
version = "0.8.4"
version = "0.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e8094feaf31ff591f651a2664fb9cfd92bba7a60ce3197265e9482ebe753c8f7"
checksum = "692eaaf7f7607518dd3cef090f1474b61edc5301d8012f09579920df68b725ee"
dependencies = [
"hashbrown",
]
@@ -1081,6 +1081,20 @@ version = "3.0.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8bb03732005da905c88227371639bf1ad885cc712789c011c31c5fb3ab3ccf02"
[[package]]
name = "ipaddress"
version = "0.1.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "957bb9f3645d6bb7f36df99d5105b4866aa79749819d7c176a170a27dc477cbf"
dependencies = [
"lazy_static",
"libc",
"num",
"num-integer",
"num-traits",
"regex",
]
[[package]]
name = "ipconfig"
version = "0.3.2"
@@ -1233,7 +1247,7 @@ dependencies = [
[[package]]
name = "librocksdb-sys"
version = "0.16.0+8.10.0"
source = "git+https://github.com/rust-rocksdb/rust-rocksdb?rev=8fccdf5473e3e75a5ce0f42e5ff5e89c2012305b#8fccdf5473e3e75a5ce0f42e5ff5e89c2012305b"
source = "git+https://github.com/rust-rocksdb/rust-rocksdb?rev=1fb26dd5dc363c9fded526bac45366a436fc50a9#1fb26dd5dc363c9fded526bac45366a436fc50a9"
dependencies = [
"bindgen",
"bzip2-sys",
@@ -1249,8 +1263,7 @@ dependencies = [
[[package]]
name = "libsqlite3-sys"
version = "0.27.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cf4e226dcd58b4be396f7bd3c20da8fdee2911400705297ba7d2d7cc2c30f716"
source = "git+https://github.com/rusqlite/rusqlite?rev=ccfbc28ae1edc3090fb1b331fdc145f052ec73b9#ccfbc28ae1edc3090fb1b331fdc145f052ec73b9"
dependencies = [
"cc",
"pkg-config",
@@ -1406,6 +1419,20 @@ dependencies = [
"winapi",
]
[[package]]
name = "num"
version = "0.4.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b05180d69e3da0e530ba2a1dae5110317e49e3b7f3d41be227dc5f92e49ee7af"
dependencies = [
"num-bigint",
"num-complex",
"num-integer",
"num-iter",
"num-rational",
"num-traits",
]
[[package]]
name = "num-bigint"
version = "0.4.4"
@@ -1417,6 +1444,15 @@ dependencies = [
"num-traits",
]
[[package]]
name = "num-complex"
version = "0.4.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1ba157ca0885411de85d6ca030ba7e2a83a28636056c7c699b07c8b6f7383214"
dependencies = [
"num-traits",
]
[[package]]
name = "num-integer"
version = "0.1.45"
@@ -1427,6 +1463,29 @@ dependencies = [
"num-traits",
]
[[package]]
name = "num-iter"
version = "0.1.43"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7d03e6c028c5dc5cac6e2dec0efda81fc887605bb3d884578bb6d6bf7514e252"
dependencies = [
"autocfg",
"num-integer",
"num-traits",
]
[[package]]
name = "num-rational"
version = "0.4.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0638a1c9d0a3c0914158145bc76cff373a75a627e6ecbfb71cbe6f453a5a19b0"
dependencies = [
"autocfg",
"num-bigint",
"num-integer",
"num-traits",
]
[[package]]
name = "num-traits"
version = "0.2.17"
@@ -1823,13 +1882,13 @@ dependencies = [
[[package]]
name = "regex"
version = "1.10.2"
version = "1.10.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "380b951a9c5e80ddfd6136919eef32310721aa4aacd4889a8d39124b026ab343"
checksum = "b62dbe01f0b06f9d8dc7d49e05a0785f153b00b2c227856282f671e0318c9b15"
dependencies = [
"aho-corasick",
"memchr",
"regex-automata 0.4.3",
"regex-automata 0.4.4",
"regex-syntax 0.8.2",
]
@@ -1844,9 +1903,9 @@ dependencies = [
[[package]]
name = "regex-automata"
version = "0.4.3"
version = "0.4.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5f804c7828047e88b2d32e2d7fe5a105da8ee3264f01902f796c8e067dc2483f"
checksum = "3b7fa1134405e2ec9353fd416b17f8dacd46c473d7d3fd1cf202706a14eb792a"
dependencies = [
"aho-corasick",
"memchr",
@@ -1933,7 +1992,7 @@ dependencies = [
[[package]]
name = "rocksdb"
version = "0.21.0"
source = "git+https://github.com/rust-rocksdb/rust-rocksdb?rev=8fccdf5473e3e75a5ce0f42e5ff5e89c2012305b#8fccdf5473e3e75a5ce0f42e5ff5e89c2012305b"
source = "git+https://github.com/rust-rocksdb/rust-rocksdb?rev=1fb26dd5dc363c9fded526bac45366a436fc50a9#1fb26dd5dc363c9fded526bac45366a436fc50a9"
dependencies = [
"libc",
"librocksdb-sys",
@@ -1942,7 +2001,7 @@ dependencies = [
[[package]]
name = "ruma"
version = "0.9.4"
source = "git+https://github.com/ruma/ruma?rev=aa3acd88d21dfbb7595f54e619f52761bcb0259e#aa3acd88d21dfbb7595f54e619f52761bcb0259e"
source = "git+https://github.com/ruma/ruma?rev=684ffc789877355bd25269451b2356817c17cc3f#684ffc789877355bd25269451b2356817c17cc3f"
dependencies = [
"assign",
"js_int",
@@ -1961,7 +2020,7 @@ dependencies = [
[[package]]
name = "ruma-appservice-api"
version = "0.9.0"
source = "git+https://github.com/ruma/ruma?rev=aa3acd88d21dfbb7595f54e619f52761bcb0259e#aa3acd88d21dfbb7595f54e619f52761bcb0259e"
source = "git+https://github.com/ruma/ruma?rev=684ffc789877355bd25269451b2356817c17cc3f#684ffc789877355bd25269451b2356817c17cc3f"
dependencies = [
"js_int",
"ruma-common",
@@ -1973,7 +2032,7 @@ dependencies = [
[[package]]
name = "ruma-client-api"
version = "0.17.4"
source = "git+https://github.com/ruma/ruma?rev=aa3acd88d21dfbb7595f54e619f52761bcb0259e#aa3acd88d21dfbb7595f54e619f52761bcb0259e"
source = "git+https://github.com/ruma/ruma?rev=684ffc789877355bd25269451b2356817c17cc3f#684ffc789877355bd25269451b2356817c17cc3f"
dependencies = [
"as_variant",
"assign",
@@ -1992,7 +2051,7 @@ dependencies = [
[[package]]
name = "ruma-common"
version = "0.12.1"
source = "git+https://github.com/ruma/ruma?rev=aa3acd88d21dfbb7595f54e619f52761bcb0259e#aa3acd88d21dfbb7595f54e619f52761bcb0259e"
source = "git+https://github.com/ruma/ruma?rev=684ffc789877355bd25269451b2356817c17cc3f#684ffc789877355bd25269451b2356817c17cc3f"
dependencies = [
"as_variant",
"base64",
@@ -2020,7 +2079,7 @@ dependencies = [
[[package]]
name = "ruma-events"
version = "0.27.11"
source = "git+https://github.com/ruma/ruma?rev=aa3acd88d21dfbb7595f54e619f52761bcb0259e#aa3acd88d21dfbb7595f54e619f52761bcb0259e"
source = "git+https://github.com/ruma/ruma?rev=684ffc789877355bd25269451b2356817c17cc3f#684ffc789877355bd25269451b2356817c17cc3f"
dependencies = [
"as_variant",
"indexmap",
@@ -2042,7 +2101,7 @@ dependencies = [
[[package]]
name = "ruma-federation-api"
version = "0.8.0"
source = "git+https://github.com/ruma/ruma?rev=aa3acd88d21dfbb7595f54e619f52761bcb0259e#aa3acd88d21dfbb7595f54e619f52761bcb0259e"
source = "git+https://github.com/ruma/ruma?rev=684ffc789877355bd25269451b2356817c17cc3f#684ffc789877355bd25269451b2356817c17cc3f"
dependencies = [
"js_int",
"ruma-common",
@@ -2054,7 +2113,7 @@ dependencies = [
[[package]]
name = "ruma-identifiers-validation"
version = "0.9.3"
source = "git+https://github.com/ruma/ruma?rev=aa3acd88d21dfbb7595f54e619f52761bcb0259e#aa3acd88d21dfbb7595f54e619f52761bcb0259e"
source = "git+https://github.com/ruma/ruma?rev=684ffc789877355bd25269451b2356817c17cc3f#684ffc789877355bd25269451b2356817c17cc3f"
dependencies = [
"js_int",
"thiserror",
@@ -2063,7 +2122,7 @@ dependencies = [
[[package]]
name = "ruma-identity-service-api"
version = "0.8.0"
source = "git+https://github.com/ruma/ruma?rev=aa3acd88d21dfbb7595f54e619f52761bcb0259e#aa3acd88d21dfbb7595f54e619f52761bcb0259e"
source = "git+https://github.com/ruma/ruma?rev=684ffc789877355bd25269451b2356817c17cc3f#684ffc789877355bd25269451b2356817c17cc3f"
dependencies = [
"js_int",
"ruma-common",
@@ -2073,7 +2132,7 @@ dependencies = [
[[package]]
name = "ruma-macros"
version = "0.12.0"
source = "git+https://github.com/ruma/ruma?rev=aa3acd88d21dfbb7595f54e619f52761bcb0259e#aa3acd88d21dfbb7595f54e619f52761bcb0259e"
source = "git+https://github.com/ruma/ruma?rev=684ffc789877355bd25269451b2356817c17cc3f#684ffc789877355bd25269451b2356817c17cc3f"
dependencies = [
"once_cell",
"proc-macro-crate",
@@ -2088,7 +2147,7 @@ dependencies = [
[[package]]
name = "ruma-push-gateway-api"
version = "0.8.0"
source = "git+https://github.com/ruma/ruma?rev=aa3acd88d21dfbb7595f54e619f52761bcb0259e#aa3acd88d21dfbb7595f54e619f52761bcb0259e"
source = "git+https://github.com/ruma/ruma?rev=684ffc789877355bd25269451b2356817c17cc3f#684ffc789877355bd25269451b2356817c17cc3f"
dependencies = [
"js_int",
"ruma-common",
@@ -2100,7 +2159,7 @@ dependencies = [
[[package]]
name = "ruma-signatures"
version = "0.14.0"
source = "git+https://github.com/ruma/ruma?rev=aa3acd88d21dfbb7595f54e619f52761bcb0259e#aa3acd88d21dfbb7595f54e619f52761bcb0259e"
source = "git+https://github.com/ruma/ruma?rev=684ffc789877355bd25269451b2356817c17cc3f#684ffc789877355bd25269451b2356817c17cc3f"
dependencies = [
"base64",
"ed25519-dalek",
@@ -2116,7 +2175,7 @@ dependencies = [
[[package]]
name = "ruma-state-res"
version = "0.10.0"
source = "git+https://github.com/ruma/ruma?rev=aa3acd88d21dfbb7595f54e619f52761bcb0259e#aa3acd88d21dfbb7595f54e619f52761bcb0259e"
source = "git+https://github.com/ruma/ruma?rev=684ffc789877355bd25269451b2356817c17cc3f#684ffc789877355bd25269451b2356817c17cc3f"
dependencies = [
"itertools",
"js_int",
@@ -2131,8 +2190,7 @@ dependencies = [
[[package]]
name = "rusqlite"
version = "0.30.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a78046161564f5e7cd9008aff3b2990b3850dc8e0349119b98e8f251e099f24d"
source = "git+https://github.com/rusqlite/rusqlite?rev=ccfbc28ae1edc3090fb1b331fdc145f052ec73b9#ccfbc28ae1edc3090fb1b331fdc145f052ec73b9"
dependencies = [
"bitflags 2.4.2",
"fallible-iterator",

View file

@@ -26,7 +26,7 @@ tower-http = { version = "0.4.4", features = ["add-extension", "cors", "sensitiv
# Used for matrix spec type definitions and helpers
#ruma = { version = "0.4.0", features = ["compat", "rand", "appservice-api-c", "client-api", "federation-api", "push-gateway-api-c", "state-res", "unstable-pre-spec", "unstable-exhaustive-types"] }
ruma = { git = "https://github.com/ruma/ruma", rev = "aa3acd88d21dfbb7595f54e619f52761bcb0259e", features = ["compat", "rand", "appservice-api-c", "client-api", "federation-api", "push-gateway-api-c", "state-res", "unstable-msc2448", "unstable-msc3575", "unstable-exhaustive-types", "ring-compat", "unstable-unspecified", "unstable-msc2870", "unstable-msc3061", "unstable-extensible-events"] }
ruma = { git = "https://github.com/ruma/ruma", rev = "684ffc789877355bd25269451b2356817c17cc3f", features = ["compat", "rand", "appservice-api-c", "client-api", "federation-api", "push-gateway-api-c", "state-res", "unstable-msc2448", "unstable-msc3575", "unstable-exhaustive-types", "ring-compat", "unstable-unspecified", "unstable-msc2870", "unstable-msc3061", "unstable-extensible-events"] }
#ruma = { git = "https://github.com/timokoesters/ruma", rev = "4ec9c69bb7e09391add2382b3ebac97b6e8f4c64", features = ["compat", "rand", "appservice-api-c", "client-api", "federation-api", "push-gateway-api-c", "state-res", "unstable-msc2448", "unstable-msc3575", "unstable-exhaustive-types", "ring-compat", "unstable-unspecified" ] }
#ruma = { path = "../ruma/crates/ruma", features = ["compat", "rand", "appservice-api-c", "client-api", "federation-api", "push-gateway-api-c", "state-res", "unstable-msc2448", "unstable-msc3575", "unstable-exhaustive-types", "ring-compat", "unstable-unspecified" ] }
@@ -51,7 +51,7 @@ serde = { version = "1.0.195", features = ["rc"] }
# Used for secure identifiers
rand = "0.8.5"
# Used to hash passwords
argon2 = "0.5"
argon2 = "0.5.3"
reqwest = { version = "0.11.23", default-features = false, features = ["rustls-tls-native-roots", "socks"] }
# Used for conduit::Error type
thiserror = "1.0.56"
@@ -64,7 +64,7 @@ ring = "0.17.7"
# Used when querying the SRV record of other servers
trust-dns-resolver = "0.23.2"
# Used to find matching events for appservices
regex = "1.10.2"
regex = "1.10.3"
# jwt jsonwebtokens
jsonwebtoken = "9.2.0"
# Performance measurements
@@ -76,14 +76,14 @@ opentelemetry_sdk = { version = "0.21.2", features = ["rt-tokio"] }
opentelemetry-jaeger = { version = "0.20.0", features = ["rt-tokio"] }
tracing-opentelemetry = "0.22.0"
lru-cache = "0.1.2"
rusqlite = { version = "0.30.0", optional = true, features = ["bundled"] }
rusqlite = { git = "https://github.com/rusqlite/rusqlite", rev = "ccfbc28ae1edc3090fb1b331fdc145f052ec73b9", optional = true, features = ["bundled"] }
parking_lot = { version = "0.12.1", optional = true }
num_cpus = "1.16.0"
threadpool = "1.8.1"
# Used for ruma wrapper
serde_html_form = "0.2.3"
rocksdb = { git = "https://github.com/rust-rocksdb/rust-rocksdb", rev = "8fccdf5473e3e75a5ce0f42e5ff5e89c2012305b", default-features = false, features = ["multi-threaded-cf", "snappy", "lz4", "zstd"], optional = true }
rocksdb = { git = "https://github.com/rust-rocksdb/rust-rocksdb", rev = "1fb26dd5dc363c9fded526bac45366a436fc50a9", default-features = false, features = ["multi-threaded-cf", "snappy", "lz4", "zstd"], optional = true }
thread_local = "1.1.7"
# used for TURN server authentication
@@ -94,17 +94,17 @@ sha2 = { version = "0.10.8" }
clap = { version = "4.4.17", default-features = false, features = ["std", "derive", "help", "usage", "error-context"] }
futures-util = { version = "0.3.30", default-features = false }
# Used for reading the configuration from conduit.toml & environment variables
figment = { version = "0.10.13", features = ["env", "toml"] }
figment = { version = "0.10.14", features = ["env", "toml"] }
tikv-jemalloc-ctl = { version = "0.5.0", features = ["use_std"], optional = true }
tikv-jemallocator = { version = "0.5.0", features = ["unprefixed_malloc_on_supported_platforms"], optional = true }
lazy_static = "1.4.0"
async-trait = "0.1.77"
sd-notify = { version = "0.4.1", optional = true }
# used for checking if an IP is in specific subnets / CIDR ranges
ipaddress = "0.1.3"
# stupid ruma using JsOption instead of Option
js_option = "0.1"
sd-notify = { version = "0.4.1", optional = true }
[target.'cfg(unix)'.dependencies]
nix = { version = "0.27.1", features = ["resource"] }

View file

@@ -1,45 +1,18 @@
# Deploying Conduit
### Please note that this documentation is not fully representative of conduwuit at the moment. Assume majority of it is outdated.
> ## Getting help
>
> If you run into any problems while setting up Conduit, write an email to `conduit@koesters.xyz`, ask us
> in `#conduit:fachschaften.org` or [open an issue on GitLab](https://gitlab.com/famedly/conduit/-/issues/new).
> If you run into any problems while setting up conduwuit, ask us
> in `#conduwuit:puppygock.gay` or [open an issue on GitHub](https://github.com/girlbossceo/conduwuit/issues/new).
## Installing Conduit
## Installing conduwuit
Although you might be able to compile Conduit for Windows, we do recommend running it on a Linux server. We therefore
Although you might be able to compile conduwuit for Windows, we only support running it on a Linux server. We therefore
only offer Linux binaries.
You may simply download the binary that fits your machine. Run `uname -m` to see what you need. Now copy the appropriate url:
| CPU Architecture | Download stable version | Download development version |
| ------------------------------------------- | --------------------------------------------------------------- | ----------------------------------------------------------- |
| x84_64 / amd64 (Most servers and computers) | [Binary][x84_64-glibc-master] / [.deb][x84_64-glibc-master-deb] | [Binary][x84_64-glibc-next] / [.deb][x84_64-glibc-next-deb] |
| armv7 (e.g. Raspberry Pi by default) | [Binary][armv7-glibc-master] / [.deb][armv7-glibc-master-deb] | [Binary][armv7-glibc-next] / [.deb][armv7-glibc-next-deb] |
| armv8 / aarch64 | [Binary][armv8-glibc-master] / [.deb][armv8-glibc-master-deb] | [Binary][armv8-glibc-next] / [.deb][armv8-glibc-next-deb] |
These builds were created on and linked against the glibc version shipped with Debian bullseye.
If you use a system with an older glibc version (e.g. RHEL8), you might need to compile Conduit yourself.
[x84_64-glibc-master]: https://gitlab.com/famedly/conduit/-/jobs/artifacts/master/raw/build-output/linux_amd64/conduit?job=docker:master
[armv7-glibc-master]: https://gitlab.com/famedly/conduit/-/jobs/artifacts/master/raw/build-output/linux_arm_v7/conduit?job=docker:master
[armv8-glibc-master]: https://gitlab.com/famedly/conduit/-/jobs/artifacts/master/raw/build-output/linux_arm64/conduit?job=docker:master
[x84_64-glibc-next]: https://gitlab.com/famedly/conduit/-/jobs/artifacts/next/raw/build-output/linux_amd64/conduit?job=docker:next
[armv7-glibc-next]: https://gitlab.com/famedly/conduit/-/jobs/artifacts/next/raw/build-output/linux_arm_v7/conduit?job=docker:next
[armv8-glibc-next]: https://gitlab.com/famedly/conduit/-/jobs/artifacts/next/raw/build-output/linux_arm64/conduit?job=docker:next
[x84_64-glibc-master-deb]: https://gitlab.com/famedly/conduit/-/jobs/artifacts/master/raw/build-output/linux_amd64/conduit.deb?job=docker:master
[armv7-glibc-master-deb]: https://gitlab.com/famedly/conduit/-/jobs/artifacts/master/raw/build-output/linux_arm_v7/conduit.deb?job=docker:master
[armv8-glibc-master-deb]: https://gitlab.com/famedly/conduit/-/jobs/artifacts/master/raw/build-output/linux_arm64/conduit.deb?job=docker:master
[x84_64-glibc-next-deb]: https://gitlab.com/famedly/conduit/-/jobs/artifacts/next/raw/build-output/linux_amd64/conduit.deb?job=docker:next
[armv7-glibc-next-deb]: https://gitlab.com/famedly/conduit/-/jobs/artifacts/next/raw/build-output/linux_arm_v7/conduit.deb?job=docker:next
[armv8-glibc-next-deb]: https://gitlab.com/famedly/conduit/-/jobs/artifacts/next/raw/build-output/linux_arm64/conduit.deb?job=docker:next
```bash
$ sudo wget -O /usr/local/bin/matrix-conduit <url>
$ sudo chmod +x /usr/local/bin/matrix-conduit
```
Alternatively, you may compile the binary yourself. First, install any dependencies:
conduwuit does not offer prebuilt binaries for now. Building it yourself is the only supported method.
```bash
# Debian
@@ -135,56 +108,11 @@ $ sudo systemctl daemon-reload
## Creating the Conduit configuration file
Now we need to create the Conduit's config file in `/etc/matrix-conduit/conduit.toml`. Paste this in **and take a moment
Now we need to create the Conduit's config file in `/etc/conduwuit/conduwuit.toml`. Paste this in **and take a moment
to read it. You need to change at least the server name.**
You can also choose to use a different database backend, but right now only `rocksdb` and `sqlite` are recommended.
```toml
[global]
# The server_name is the pretty name of this server. It is used as a suffix for user
# and room ids. Examples: matrix.org, conduit.rs
# The Conduit server needs all /_matrix/ requests to be reachable at
# https://your.server.name/ on port 443 (client-server) and 8448 (federation).
# If that's not possible for you, you can create /.well-known files to redirect
# requests. See
# https://matrix.org/docs/spec/client_server/latest#get-well-known-matrix-client
# and
# https://matrix.org/docs/spec/server_server/r0.1.4#get-well-known-matrix-server
# for more information
# YOU NEED TO EDIT THIS
#server_name = "your.server.name"
# This is the only directory where Conduit will save its data
database_path = "/var/lib/matrix-conduit/"
database_backend = "rocksdb"
# The port Conduit will be running on. You need to set up a reverse proxy in
# your web server (e.g. apache or nginx), so all requests to /_matrix on port
# 443 and 8448 will be forwarded to the Conduit instance running on this port
# Docker users: Don't change this, you'll need to map an external port to this.
port = 6167
# Max size for uploads
max_request_size = 20_000_000 # in bytes
# Enables registration. If set to false, no users can register on this server.
allow_registration = true
allow_federation = true
allow_check_for_updates = true
# Server to get public keys from. You probably shouldn't change this
trusted_servers = ["matrix.org"]
#max_concurrent_requests = 100 # How many requests Conduit sends to other servers at the same time
#log = "warn,state_res=warn,rocket=off,_=off"
address = "127.0.0.1" # This makes sure Conduit can only be reached using the reverse proxy
#address = "0.0.0.0" # If Conduit is running in a container, make sure the reverse proxy (ie. Traefik) can reach it.
```
See the following example config at [conduwuit-example.toml](conduwuit-example.toml)
## Setting the correct file permissions

View file

@@ -1,9 +1,7 @@
### list of features, bug fixes, etc that conduwuit does that upstream does not:
- Has a working CI/CD for tests, codebase warnings (rustc and clippy), caching, and build (still need to output artifacts with build variants)
- Fixed every single clippy (default lints) and rustc warnings, including some that were performance related or potential safety issues / unsoundness
- Has dependabot and significantly updates all dependencies possible
- Uses upstream reqwest instead of super old fork (via upstream MR)
- Uses proper argon2 crate instead of questionable rust-argon2 crate
- Improved and cleaned up logging (less noisy dead server logging, registration attempts, more useful troubleshooting logging, etc)
- Attempts and interest in removing extreme and unnecessary panics/unwraps/expects that can lead to denial of service or such (upstream and upstream contributors want this unusual behaviour for some reason)
@@ -21,7 +19,6 @@
- conduwuit allows MXIDs with `+` in them (thanks to Ruma update)
- Revamped admin room infrastructure and commands (via upstream MR)
- Make spaces/hierarchy cache use cache_capacity_modifier instead of hardcoded small value
- Send missing push notifications on invitations (via upstream MR)
- Make PDU appending, building, etc asynchronous
- Add *optional* feature flag to use SHA256 key names for media instead of base64 to overcome filesystem file name length limitations (OS error file name too long) (via upstream MR)
- Add *optional* feature flag to enable zstd HTTP body compression
@@ -53,4 +50,13 @@
- Generate passwords with 25 characters instead of 15
- Add missing `reason` field to user ban events (`/ban`)
- For all [`/report`](https://spec.matrix.org/v1.9/client-server-api/#post_matrixclientv3roomsroomidreporteventid) requests: check if the reported event ID belongs to the reported room ID, raise report reasoning character limit to 750, fix broken formatting, make a small delayed random response per spec suggestion on privacy, and check if the sender user is in the reported room.
- Support blocking servers from downloading remote media from
- Support blocking servers from downloading remote media from
- Support sending `well_known` response to client logins if using config option `well_known_client`
- Send `avatar_url` on invite room membership events/changes
- Revamp example config, adding a lot of config options available (still some missing)
- Return joined member count of rooms for push rules/conditions instead of a hardcoded value of 10
- Respect *most* client parameters for `/media/` requests (`allow_redirect` still needs work)
- Config option `ip_range_denylist` to support refusing to send requests (typically federation) to specific IP ranges, typically RFC 1918, non-routable, testnet, etc addresses like Synapse for security.
- Support for creating rooms with custom room IDs like Maunium Synapse (`room_id` request body field to `/createRoom`)
- Assume well-knowns are broken if they exceed past 15000 characters.
- Basic validation/checks on user-specified room aliases and custom room ID creations

View file

@@ -1,132 +0,0 @@
# syntax=docker/dockerfile:1
FROM docker.io/rust:1.75.0-bullseye AS base
FROM base AS builder
WORKDIR /usr/src/conduit
# Install required packages to build Conduit and it's dependencies
RUN apt-get update && \
apt-get -y --no-install-recommends install libclang-dev=1:11.0-51+nmu5
# == Build dependencies without our own code separately for caching ==
#
# Need a fake main.rs since Cargo refuses to build anything otherwise.
#
# See https://github.com/rust-lang/cargo/issues/2644 for a Cargo feature
# request that would allow just dependencies to be compiled, presumably
# regardless of whether source files are available.
RUN mkdir src && touch src/lib.rs && echo 'fn main() {}' > src/main.rs
COPY Cargo.toml Cargo.lock ./
RUN cargo build --release && rm -r src
# Copy over actual Conduit sources
COPY src src
# main.rs and lib.rs need their timestamp updated for this to work correctly since
# otherwise the build with the fake main.rs from above is newer than the
# source files (COPY preserves timestamps).
#
# Builds conduit and places the binary at /usr/src/conduit/target/release/conduit
RUN touch src/main.rs && touch src/lib.rs && cargo build --release
# ONLY USEFUL FOR CI: target stage to extract build artifacts
FROM scratch AS builder-result
COPY --from=builder /usr/src/conduit/target/release/conduit /conduit
# ---------------------------------------------------------------------------------------------------------------
# Build cargo-deb, a tool to package up rust binaries into .deb packages for Debian/Ubuntu based systems:
# ---------------------------------------------------------------------------------------------------------------
FROM base AS build-cargo-deb
RUN apt-get update && \
apt-get install -y --no-install-recommends \
dpkg \
dpkg-dev \
liblzma-dev
RUN cargo install cargo-deb
# => binary is in /usr/local/cargo/bin/cargo-deb
# ---------------------------------------------------------------------------------------------------------------
# Package conduit build-result into a .deb package:
# ---------------------------------------------------------------------------------------------------------------
FROM builder AS packager
WORKDIR /usr/src/conduit
COPY ./LICENSE ./LICENSE
COPY ./README.md ./README.md
COPY debian ./debian
COPY --from=build-cargo-deb /usr/local/cargo/bin/cargo-deb /usr/local/cargo/bin/cargo-deb
# --no-build makes cargo-deb reuse already compiled project
RUN cargo deb --no-build
# => Package is in /usr/src/conduit/target/debian/<project_name>_<version>_<arch>.deb
# ONLY USEFUL FOR CI: target stage to extract build artifacts
FROM scratch AS packager-result
COPY --from=packager /usr/src/conduit/target/debian/*.deb /conduit.deb
# ---------------------------------------------------------------------------------------------------------------
# Stuff below this line actually ends up in the resulting docker image
# ---------------------------------------------------------------------------------------------------------------
FROM docker.io/debian:bullseye-slim AS runner
# Standard port on which Conduit launches.
# You still need to map the port when using the docker command or docker-compose.
EXPOSE 6167
ARG DEFAULT_DB_PATH=/var/lib/matrix-conduit
ENV CONDUIT_PORT=6167 \
CONDUIT_ADDRESS="0.0.0.0" \
CONDUIT_DATABASE_PATH=${DEFAULT_DB_PATH} \
CONDUIT_CONFIG=''
# └─> Set no config file to do all configuration with env vars
# Conduit needs:
# dpkg: to install conduit.deb
# ca-certificates: for https
# iproute2 & wget: for the healthcheck script
RUN apt-get update && apt-get -y --no-install-recommends install \
dpkg \
ca-certificates \
iproute2 \
wget \
&& rm -rf /var/lib/apt/lists/*
# Test if Conduit is still alive, uses the same endpoint as Element
COPY ./docker/healthcheck.sh /srv/conduit/healthcheck.sh
HEALTHCHECK --start-period=5s --interval=5s CMD ./healthcheck.sh
# Install conduit.deb:
COPY --from=packager /usr/src/conduit/target/debian/*.deb /srv/conduit/
RUN dpkg -i /srv/conduit/*.deb
# Improve security: Don't run stuff as root, that does not need to run as root
# Most distros also use 1000:1000 for the first real user, so this should resolve volume mounting problems.
ARG USER_ID=1000
ARG GROUP_ID=1000
RUN set -x ; \
groupadd -r -g ${GROUP_ID} conduit ; \
useradd -l -r -M -d /srv/conduit -o -u ${USER_ID} -g conduit conduit && exit 0 ; exit 1
# Create database directory, change ownership of Conduit files to conduit user and group and make the healthcheck executable:
RUN chown -cR conduit:conduit /srv/conduit && \
chmod +x /srv/conduit/healthcheck.sh && \
mkdir -p ${DEFAULT_DB_PATH} && \
chown -cR conduit:conduit ${DEFAULT_DB_PATH}
# Change user to conduit, no root permissions afterwards:
USER conduit
# Set container home directory
WORKDIR /srv/conduit
# Run Conduit and print backtraces on panics
ENV RUST_BACKTRACE=1
ENTRYPOINT [ "/usr/sbin/matrix-conduit" ]

View file

@@ -1,97 +0,0 @@
# =============================================================================
# This is the official example config for Conduit.
# If you use it for your server, you will need to adjust it to your own needs.
# At the very least, change the server_name field!
# =============================================================================
[global]
# The server_name is the pretty name of this server. It is used as a suffix for user
# and room ids. Examples: matrix.org, conduit.rs
# The Conduit server needs all /_matrix/ requests to be reachable at
# https://your.server.name/ on port 443 (client-server) and 8448 (federation).
# If that's not possible for you, you can create /.well-known files to redirect
# requests. See
# https://matrix.org/docs/spec/client_server/latest#get-well-known-matrix-client
# and
# https://matrix.org/docs/spec/server_server/r0.1.4#get-well-known-matrix-server
# for more information
# YOU NEED TO EDIT THIS
#server_name = "your.server.name"
# This is the only directory where Conduit will save its data
database_path = "/var/lib/matrix-conduit/"
database_backend = "rocksdb"
# The port Conduit will be running on. You need to set up a reverse proxy in
# your web server (e.g. apache or nginx), so all requests to /_matrix on port
# 443 and 8448 will be forwarded to the Conduit instance running on this port
# Docker users: Don't change this, you'll need to map an external port to this.
port = 6167
# Max size for uploads
max_request_size = 20_000_000 # in bytes
# Enables open registration. If set to false, no users can register on this
# server (unless a token is configured).
# If set to true, users can register with no form of 2nd step only if you set
# `yes_i_am_very_very_sure_i_want_an_open_registration_server_prone_to_abuse` to
# in your config. If you would like
# registration only via token reg, please set this to *false* and configure the
# `registration_token` key.
allow_registration = false
# A static registration token that new users will have to provide when creating
# an account. If unset and `allow_registration` is true, registration is open
# without any condition. YOU NEED TO EDIT THIS.
registration_token = "change this token for something specific to your server"
allow_federation = true
allow_check_for_updates = true
# Enable the display name lightning bolt on registration.
enable_lightning_bolt = true
# Servers listed here will be used to gather public keys of other servers.
# Generally, copying this exactly should be enough. (Currently, Conduit doesn't
# support batched key requests, so this list should only contain Synapse
# servers.)
trusted_servers = ["matrix.org"]
#max_concurrent_requests = 400 # How many requests Conduit sends to other servers at the same time
#log = "warn,state_res=warn"
address = "127.0.0.1" # This makes sure Conduit can only be reached using the reverse proxy
#address = "0.0.0.0" # If Conduit is running in a container, make sure the reverse proxy (ie. Traefik) can reach it.
# Set this to true to allow your server's public room directory to be federated.
# Set this to false to protect against /publicRooms spiders, but will forbid external users from viewing your server's public room directory.
# If federation is disabled entirely (`allow_federation`), this is inherently false.
allow_public_room_directory_over_federation = false
# Set this to true to allow your server's public room directory to be queried without client authentication (access token) through the Client APIs.
# Set this to false to protect against /publicRooms spiders.
allow_public_room_directory_without_auth = false
# Set this to true to allow federating device display names / allow external users to see your device display name.
# If federation is disabled entirely (`allow_federation`), this is inherently false.
allow_device_name_federation = false
# Uncomment unix_socket_path to listen on a UNIX socket at the specified path.
# If listening on a UNIX socket, you must remove the 'address' key if defined and add your
# reverse proxy (nginx/Caddy/Apache/etc) to the 'conduit' group, unless world RW
# permissions are specified with unix_socket_perms (666 minimum).
#unix_socket_path = "/run/conduit/conduit.sock"
#unix_socket_perms = 660
# Set this to true for Conduit to compress HTTP response bodies using zstd.
# Please be aware that enabling HTTP compression may introduce compression side-channel attacks and response body tampering to potentially defeat TLS.
# Most users should not need to enable this.
# See https://breachattack.com/ and https://wikipedia.org/wiki/BREACH before deciding to enable this.
zstd_compression = false
# Set to true to allow user type "guest" registrations
allow_guest_registration = false

242
conduwuit-example.toml Normal file
View file

@@ -0,0 +1,242 @@
# =============================================================================
# This is the official example config for conduwuit.
# If you use it for your server, you will need to adjust it to your own needs.
# At the very least, change the server_name field!
# =============================================================================
[global]
# The server_name is the pretty name of this server. It is used as a suffix for user
# and room ids. Examples: matrix.org, conduit.rs
# The Conduit server needs all /_matrix/ requests to be reachable at
# https://your.server.name/ on port 443 (client-server) and 8448 (federation).
# If that's not possible for you, you can create /.well-known files to redirect
# requests (delegation). See
# https://spec.matrix.org/latest/client-server-api/#getwell-knownmatrixclient
# and
# https://spec.matrix.org/v1.9/server-server-api/#getwell-knownmatrixserver
# for more information
# YOU NEED TO EDIT THIS
#server_name = "your.server.name"
# Servers listed here will be used to gather public keys of other servers.
# Generally, copying this exactly should be enough. (Currently, conduwuit doesn't
# support batched key requests, so this list should only contain Synapse
# servers.) Defaults to `matrix.org`
# trusted_servers = ["matrix.org"]
### Database configuration
# This is the only directory where conduwuit will save its data, including media
database_path = "/var/lib/conduwuit/"
# Database backend: Only rocksdb and sqlite are supported. Please note that sqlite
# will perform significantly worse than rocksdb as it is not intended to be used the
# way it is by conduwuit. sqlite only exists for historical reasons.
database_backend = "rocksdb"
### Network
# The port conduwuit will be running on. You need to set up a reverse proxy such as
# Caddy or Nginx so all requests to /_matrix on port 443 and 8448 will be
# forwarded to the conduwuit instance running on this port
# Docker users: Don't change this, you'll need to map an external port to this.
port = 6167
# default address (IPv4 or IPv6) conduwuit will listen on. Generally you want this to be
# localhost (127.0.0.1 / ::1). If you are using Docker or a container NAT networking setup, you
# likely need this to be 0.0.0.0.
address = "127.0.0.1"
# How many requests conduwuit sends to other servers at the same time. Default is 100
# Note that because conduwuit is very fast unlike other homeserver implementations,
# setting this too high could inadvertently result in ratelimits kicking in, or
# overloading lower-end homeservers out there. Recommended to leave this alone unless you
# have a valid reason to. No this will not speed up room joins.
#max_concurrent_requests = 100
# Max request size for file uploads
max_request_size = 20_000_000 # in bytes
# Uncomment unix_socket_path to listen on a UNIX socket at the specified path.
# If listening on a UNIX socket, you must remove/comment the 'address' key if defined and add your
# reverse proxy to the 'conduwuit' group, unless world RW permissions are specified with unix_socket_perms (666 minimum).
#unix_socket_path = "/run/conduwuit/conduwuit.sock"
#unix_socket_perms = 660
# Set this to true for conduwuit to compress HTTP response bodies using zstd.
# Please be aware that enabling HTTP compression may weaken or even defeat TLS.
# Most users should not need to enable this.
# See https://breachattack.com/ and https://wikipedia.org/wiki/BREACH before deciding to enable this.
zstd_compression = false
# Vector list of IPv4 and IPv6 CIDR ranges / subnets *in quotes* that you do not want conduwuit to send outbound requests to.
# Defaults to RFC1918, unroutable, loopback, multicast, and testnet addresses for security.
#
# To disable, set this to be an empty vector (`[]`).
#
# Currently this does not account for proxies in use like Synapse does.
ip_range_denylist = [
"127.0.0.0/8",
"10.0.0.0/8",
"172.16.0.0/12",
"192.168.0.0/16",
"100.64.0.0/10",
"192.0.0.0/24",
"169.254.0.0/16",
"192.88.99.0/24",
"198.18.0.0/15",
"192.0.2.0/24",
"198.51.100.0/24",
"203.0.113.0/24",
"224.0.0.0/4",
"::1/128",
"fe80::/10",
"fc00::/7",
"2001:db8::/32",
"ff00::/8",
"fec0::/10",
]
### Moderation / Privacy / Security
# Set to true to allow user type "guest" registrations. Element attempts to register guest users automatically.
# For private homeservers, this is best at false.
allow_guest_registration = false
# Vector list of servers that conduwuit will refuse to download remote media from.
# No default.
# prevent_media_downloads_from = ["example.com", "example.local"]
# Enables open registration. If set to false, no users can register on this
# server.
# If set to true without a token configured, users can register with no form of 2nd-
# step only if you set
# `yes_i_am_very_very_sure_i_want_an_open_registration_server_prone_to_abuse` to
# true in your config. If you would like
# registration only via token reg, please configure the `registration_token` key.
allow_registration = false
# Please note that an open registration homeserver with no second-step verification
# is highly prone to abuse and potential defederation by homeservers, including
# matrix.org.
# A static registration token that new users will have to provide when creating
# an account. If unset and `allow_registration` is true, open registration must be
# explicitly enabled as described above. YOU NEED TO EDIT THIS.
registration_token = "change this token for something specific to your server"
# controls whether federation is allowed or not
# defaults to false
allow_federation = true
# controls whether users are allowed to create rooms.
# appservices and admins are always allowed to create rooms
# defaults to true
# allow_room_creation = true
# Set this to true to allow your server's public room directory to be federated.
# Set this to false to protect against /publicRooms spiders; this will prevent external users
# from viewing your server's public room directory. If federation is disabled entirely
# (`allow_federation`), this is inherently false.
allow_public_room_directory_over_federation = false
# Set this to true to allow your server's public room directory to be queried without client
# authentication (access token) through the Client APIs. Set this to false to protect against /publicRooms spiders.
allow_public_room_directory_without_auth = false
# Set this to true to allow federating device display names / allow external users to see your device display name.
# If federation is disabled entirely (`allow_federation`), this is inherently false. For privacy, this is best disabled.
allow_device_name_federation = false
### Misc
# max log level for conduwuit. allows debug, info, warn, or error
#log = "warn"
# controls whether encrypted rooms and events are allowed (default true)
#allow_encryption = false
# conduwuit will send a simple GET request periodically to `https://pupbrain.dev/check-for-updates/stable`
# to check for any new announcements made. Despite the name, this is not an update check
# endpoint; it is simply an announcement check endpoint. I don't plan on using
# this, so feel free to disable it.
allow_check_for_updates = true
# Enables adding the lightning bolt emoji (⚡️) to all newly registered users'
# initial display names.
enable_lightning_bolt = false
# If you are using delegation via well-known files and you cannot serve them from your reverse proxy, you can
# uncomment these to serve them directly from conduwuit. For this to work, all requests must be proxied to conduwuit, not just `/_matrix`.
#well_known_server = "matrix.example.com:443"
#well_known_client = "https://matrix.example.com"
# Note that whatever you put will show up in the well-known JSON values.
# Set to false to prevent users from joining or creating rooms with versions that aren't 100% officially supported by conduwuit.
# conduwuit officially supports room versions 6 - 10. conduwuit has experimental/unstable support for 1 - 5, and 11.
# Defaults to true.
#allow_unstable_room_versions = true
# Set this to any float value to multiply conduwuit's in-memory LRU cache capacities by.
# May be useful if you have significant memory to spare to increase performance.
# Defaults to 1.0.
#conduit_cache_capacity_modifier = 1.0
# Set this to any float value in megabytes for conduwuit to tell the database engine that this much memory is available for database-related caches.
# May be useful if you have significant memory to spare to increase performance.
# Defaults to 900.0
#db_cache_capacity_mb = 900.0
### RocksDB options
# Set this to true to use RocksDB config options that are tailored to HDDs (slower device storage)
#rocksdb_optimize_for_spinning_disks = false
# RocksDB log level. This is not the same as conduwuit's log level. This is the log level for RocksDB itself,
# whose logs show up in your database folder/path as `LOG` files. Defaults to warn. conduwuit will typically log RocksDB errors.
#rocksdb_log_level = "warn"
# Max RocksDB `LOG` file size, in bytes, before rotating. Defaults to 4MB.
#rocksdb_max_log_file_size = 4194304
# Time in seconds before RocksDB will forcibly rotate logs. Defaults to 0.
#rocksdb_log_time_to_roll = 0
### Presence
# Config option to control local (your server only) presence updates/requests. Defaults to false.
# Note that presence on conduwuit is very fast unlike Synapse's.
#allow_local_presence = false
# Config option to control incoming federated presence updates/requests. Defaults to false.
# This option receives presence updates from other servers, but does not send any unless `allow_outgoing_presence` is true.
# Note that presence on conduwuit is very fast unlike Synapse's.
#allow_incoming_presence = false
# Config option to control outgoing presence updates/requests. Defaults to false.
# This option sends presence updates to other servers, but does not receive any unless `allow_incoming_presence` is true.
# Note that presence on conduwuit is very fast unlike Synapse's.
#
# Warning: Outgoing federated presence is not spec compliant due to relying on PDUs and EDUs combined.
# Outgoing presence will not be very reliable due to this and any issues with federated outgoing presence are very likely attributed to this issue.
# Incoming presence and local presence are unaffected.
#allow_outgoing_presence = false
# Config option to control how many seconds of inactivity before you are marked as idle for presence. Defaults to 5 minutes (300 seconds).
#presence_idle_timeout_s = 300
# Config option to control how many seconds of inactivity before you are marked as offline for presence. Defaults to 30 minutes (1800 seconds).
#presence_offline_timeout_s = 1800
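
The registration-related options above interact with each other. Below is a minimal, illustrative Rust sketch of the effective policy, mirroring the startup checks and register-route changes shown further down in this diff; the names are made up for illustration and are not conduwuit's actual API.

// Illustrative only: summarises how allow_registration, registration_token and the
// yes_i_am_... opt-in combine, based on the checks visible elsewhere in this diff.
enum RegistrationPolicy {
    Closed,        // allow_registration = false
    TokenRequired, // allow_registration = true and registration_token is set
    Open,          // allow_registration = true, no token, explicit opt-in
    RefuseToStart, // allow_registration = true, no token, no opt-in
}

fn registration_policy(
    allow_registration: bool,
    registration_token: Option<&str>,
    open_registration_opt_in: bool, // yes_i_am_very_very_sure_..._prone_to_abuse
) -> RegistrationPolicy {
    match (allow_registration, registration_token, open_registration_opt_in) {
        (false, _, _) => RegistrationPolicy::Closed,
        (true, Some(_), _) => RegistrationPolicy::TokenRequired,
        (true, None, true) => RegistrationPolicy::Open,
        (true, None, false) => RegistrationPolicy::RefuseToStart,
    }
}

fn main() {
    // A configured token means token-gated registration.
    assert!(matches!(
        registration_policy(true, Some("s3kr1t"), false),
        RegistrationPolicy::TokenRequired
    ));
    // Open registration without the explicit opt-in makes conduwuit refuse to start.
    assert!(matches!(
        registration_policy(true, None, false),
        RegistrationPolicy::RefuseToStart
    ));
}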

flake.lock generated
View file

@ -7,11 +7,11 @@
]
},
"locked": {
"lastModified": 1704819371,
"narHash": "sha256-oFUfPWrWGQTZaCM3byxwYwrMLwshDxVGOrMH5cVP/X8=",
"lastModified": 1705974079,
"narHash": "sha256-HyC3C2esW57j6bG0MKwX4kQi25ltslRnr6z2uvpadJo=",
"owner": "ipetkov",
"repo": "crane",
"rev": "5c234301a1277e4cc759c23a2a7a00a06ddd7111",
"rev": "0b4e511fe6e346381e31d355e03de52aa43e8cb2",
"type": "github"
},
"original": {
@ -28,11 +28,11 @@
"rust-analyzer-src": "rust-analyzer-src"
},
"locked": {
"lastModified": 1705126891,
"narHash": "sha256-RnCWzRghSpyxKs3kXgYPkZv6TvzV3Pmve1je6RQHe1o=",
"lastModified": 1706077451,
"narHash": "sha256-kd8Mlh+4NIG/NIkXeEwSIlwQuvysKJM4BeLrt2nvcc8=",
"owner": "nix-community",
"repo": "fenix",
"rev": "89a02ff13d98d54f0b3b41f9b8326eb26d7cdc2e",
"rev": "77d5a2dd0b186c40953c435da6e0c2215d7e1dec",
"type": "github"
},
"original": {
@ -46,11 +46,11 @@
"systems": "systems"
},
"locked": {
"lastModified": 1701680307,
"narHash": "sha256-kAuep2h5ajznlPMD9rnQyffWG8EM/C73lejGofXvdM8=",
"lastModified": 1705309234,
"narHash": "sha256-uNRRNRKmJyCRC/8y1RqBkqWBLM034y4qN7EprSdmgyA=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "4022d587cbbfd70fe950c1e2083a02621806a725",
"rev": "1ef2e671c3b0c19053962c07dbda38332dcebf26",
"type": "github"
},
"original": {
@ -59,13 +59,28 @@
"type": "github"
}
},
"nix-filter": {
"locked": {
"lastModified": 1705332318,
"narHash": "sha256-kcw1yFeJe9N4PjQji9ZeX47jg0p9A0DuU4djKvg1a7I=",
"owner": "numtide",
"repo": "nix-filter",
"rev": "3449dc925982ad46246cfc36469baf66e1b64f17",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "nix-filter",
"type": "github"
}
},
"nixpkgs": {
"locked": {
"lastModified": 1704722960,
"narHash": "sha256-mKGJ3sPsT6//s+Knglai5YflJUF2DGj7Ai6Ynopz0kI=",
"lastModified": 1705856552,
"narHash": "sha256-JXfnuEf5Yd6bhMs/uvM67/joxYKoysyE3M2k6T3eWbg=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "317484b1ead87b9c1b8ac5261a8d2dd748a0492d",
"rev": "612f97239e2cc474c13c9dafa0df378058c5ad8d",
"type": "github"
},
"original": {
@ -80,17 +95,18 @@
"crane": "crane",
"fenix": "fenix",
"flake-utils": "flake-utils",
"nix-filter": "nix-filter",
"nixpkgs": "nixpkgs"
}
},
"rust-analyzer-src": {
"flake": false,
"locked": {
"lastModified": 1704974004,
"narHash": "sha256-H3RdtMxH8moTInVmracgtF8bgFpaEE3zYoSkuv7PBs0=",
"lastModified": 1705864945,
"narHash": "sha256-ZATChFWHToTZQFLlzrzDUX8fjEbMHHBIyPaZU1JGmjI=",
"owner": "rust-lang",
"repo": "rust-analyzer",
"rev": "9d8889cdfcc3aa0302353fc988ed21ff9bc9925c",
"rev": "d410d4a2baf9e99b37b03dd42f06238b14374bf7",
"type": "github"
},
"original": {

View file

@ -2,6 +2,7 @@
inputs = {
nixpkgs.url = "github:NixOS/nixpkgs?ref=nixos-unstable";
flake-utils.url = "github:numtide/flake-utils";
nix-filter.url = "github:numtide/nix-filter";
fenix = {
url = "github:nix-community/fenix";
@ -18,6 +19,7 @@
{ self
, nixpkgs
, flake-utils
, nix-filter
, fenix
, crane
@ -45,6 +47,7 @@
];
};
# Use mold on Linux
stdenv = if pkgs.stdenv.isLinux then
pkgs.stdenvAdapters.useMoldLinker pkgs.stdenv
else
@ -93,12 +96,44 @@
in
{
packages.default = builder {
src = ./.;
src = nix-filter {
root = ./.;
include = [
"src"
"Cargo.toml"
"Cargo.lock"
];
};
# This is redundant with CI
doCheck = false;
inherit
env
nativeBuildInputs
stdenv;
meta.mainProgram = cargoToml.package.name;
};
packages.oci-image =
let
package = self.packages.${system}.default;
in
pkgs.dockerTools.buildImage {
name = package.pname;
tag = "latest";
config = {
# Use the `tini` init system so that signals (e.g. ctrl+c/SIGINT) are
# handled as expected
Entrypoint = [
"${pkgs.lib.getExe' pkgs.tini "tini"}"
"--"
];
Cmd = [
"${pkgs.lib.getExe package}"
];
};
};
devShells.default = (pkgs.mkShell.override { inherit stdenv; }) {

View file

@ -74,11 +74,8 @@ pub async fn get_register_available_route(
/// - Creates a new account and populates it with default account data
/// - If `inhibit_login` is false: Creates a device and returns device id and access_token
pub async fn register_route(body: Ruma<register::v3::Request>) -> Result<register::v3::Response> {
if !services().globals.allow_registration()
&& !body.from_appservice
&& services().globals.config.registration_token.is_none()
{
info!("Registration disabled, no reg token configured, rejecting registration attempt for username {:?}", body.username);
if !services().globals.allow_registration() && !body.from_appservice {
info!("Registration disabled and request not from known appservice, rejecting registration attempt for username {:?}", body.username);
return Err(Error::BadRequest(
ErrorKind::Forbidden,
"Registration has been disabled.",
@ -89,10 +86,10 @@ pub async fn register_route(body: Ruma<register::v3::Request>) -> Result<registe
if is_guest
&& (!services().globals.allow_guest_registration()
|| (!services().globals.allow_registration()
|| (services().globals.allow_registration()
&& services().globals.config.registration_token.is_some()))
{
info!("Guest registration disabled / registration disabled with token configured, rejecting guest registration, initial device name: {:?}", body.initial_device_display_name);
info!("Guest registration disabled / registration enabled with token configured, rejecting guest registration, initial device name: {:?}", body.initial_device_display_name);
return Err(Error::BadRequest(
ErrorKind::GuestAccessForbidden,
"Guest registration is disabled.",
@ -144,21 +141,35 @@ pub async fn register_route(body: Ruma<register::v3::Request>) -> Result<registe
};
// UIAA
let mut uiaainfo = UiaaInfo {
flows: vec![AuthFlow {
stages: if services().globals.config.registration_token.is_some() {
vec![AuthType::RegistrationToken]
} else {
vec![AuthType::Dummy]
},
}],
completed: Vec::new(),
params: Default::default(),
session: None,
auth_error: None,
};
let mut uiaainfo;
let skip_auth;
if services().globals.config.registration_token.is_some() {
// Registration token required
uiaainfo = UiaaInfo {
flows: vec![AuthFlow {
stages: vec![AuthType::RegistrationToken],
}],
completed: Vec::new(),
params: Default::default(),
session: None,
auth_error: None,
};
skip_auth = body.from_appservice;
} else {
// No registration token necessary, but clients must still go through the flow
uiaainfo = UiaaInfo {
flows: vec![AuthFlow {
stages: vec![AuthType::Dummy],
}],
completed: Vec::new(),
params: Default::default(),
session: None,
auth_error: None,
};
skip_auth = body.from_appservice || is_guest;
}
if !body.from_appservice && !is_guest {
if !skip_auth {
if let Some(auth) = &body.auth {
let (worked, uiaainfo) = services().uiaa.try_auth(
&UserId::parse_with_server_name("", services().globals.server_name())
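
Because the two `UiaaInfo` constructions above differ only in their `stages` and in how `skip_auth` is derived, the branch can be written more compactly. This is a hedged sketch reusing the in-scope names from the function shown (`services()`, `body`, `is_guest`), not the committed code:

// Sketch: same behaviour as the if/else above, expressed once.
let token_required = services().globals.config.registration_token.is_some();
let mut uiaainfo = UiaaInfo {
    flows: vec![AuthFlow {
        stages: vec![if token_required {
            AuthType::RegistrationToken
        } else {
            AuthType::Dummy
        }],
    }],
    completed: Vec::new(),
    params: Default::default(),
    session: None,
    auth_error: None,
};
// With a token configured, only appservices may skip UIAA; without one, guests may too.
let skip_auth = body.from_appservice || (!token_required && is_guest);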

View file

@ -28,7 +28,7 @@ use ruma::{
};
use tracing::{error, info, warn};
/// # `POST /_matrix/client/r0/publicRooms`
/// # `POST /_matrix/client/v3/publicRooms`
///
/// Lists the public rooms on this server.
///
@ -41,10 +41,7 @@ pub async fn get_public_rooms_filtered_route(
.config
.allow_public_room_directory_without_auth
{
let _sender_user = body
.sender_user
.as_ref()
.ok_or_else(|| Error::BadRequest(ErrorKind::MissingToken, "Missing access token."))?;
let _sender_user = body.sender_user.as_ref().expect("user is authenticated");
}
get_public_rooms_filtered_helper(
@ -57,7 +54,7 @@ pub async fn get_public_rooms_filtered_route(
.await
}
/// # `GET /_matrix/client/r0/publicRooms`
/// # `GET /_matrix/client/v3/publicRooms`
///
/// Lists the public rooms on this server.
///
@ -70,10 +67,7 @@ pub async fn get_public_rooms_route(
.config
.allow_public_room_directory_without_auth
{
let _sender_user = body
.sender_user
.as_ref()
.ok_or_else(|| Error::BadRequest(ErrorKind::MissingToken, "Missing access token."))?;
let _sender_user = body.sender_user.as_ref().expect("user is authenticated");
}
let response = get_public_rooms_filtered_helper(

View file

@ -65,6 +65,8 @@ pub async fn get_remote_content(
mxc: &str,
server_name: &ruma::ServerName,
media_id: String,
allow_redirect: bool,
timeout_ms: Duration,
) -> Result<get_content::v3::Response, Error> {
// we'll lie to the client and say the blocked server's media was not found and log.
// the client has no way of telling anyways so this is a security bonus.
@ -82,11 +84,11 @@ pub async fn get_remote_content(
.send_federation_request(
server_name,
get_content::v3::Request {
allow_remote: false,
allow_remote: true,
server_name: server_name.to_owned(),
media_id,
timeout_ms: Duration::from_secs(20),
allow_redirect: false,
timeout_ms,
allow_redirect,
},
)
.await?;
@ -109,6 +111,8 @@ pub async fn get_remote_content(
/// Load media from our server or over federation.
///
/// - Only allows federation if `allow_remote` is true
/// - Only redirects if `allow_redirect` is true
/// - Uses client-provided `timeout_ms` if available, else defaults to 20 seconds
pub async fn get_content_route(
body: Ruma<get_content::v3::Request>,
) -> Result<get_content::v3::Response> {
@ -127,8 +131,14 @@ pub async fn get_content_route(
cross_origin_resource_policy: Some("cross-origin".to_owned()),
})
} else if &*body.server_name != services().globals.server_name() && body.allow_remote {
let remote_content_response =
get_remote_content(&mxc, &body.server_name, body.media_id.clone()).await?;
let remote_content_response = get_remote_content(
&mxc,
&body.server_name,
body.media_id.clone(),
body.allow_redirect,
body.timeout_ms,
)
.await?;
Ok(remote_content_response)
} else {
Err(Error::BadRequest(ErrorKind::NotFound, "Media not found."))
@ -140,6 +150,8 @@ pub async fn get_content_route(
/// Load media from our server or over federation, permitting desired filename.
///
/// - Only allows federation if `allow_remote` is true
/// - Only redirects if `allow_redirect` is true
/// - Uses client-provided `timeout_ms` if available, else defaults to 20 seconds
pub async fn get_content_as_filename_route(
body: Ruma<get_content_as_filename::v3::Request>,
) -> Result<get_content_as_filename::v3::Response> {
@ -156,8 +168,14 @@ pub async fn get_content_as_filename_route(
cross_origin_resource_policy: Some("cross-origin".to_owned()),
})
} else if &*body.server_name != services().globals.server_name() && body.allow_remote {
let remote_content_response =
get_remote_content(&mxc, &body.server_name, body.media_id.clone()).await?;
let remote_content_response = get_remote_content(
&mxc,
&body.server_name,
body.media_id.clone(),
body.allow_redirect,
body.timeout_ms,
)
.await?;
Ok(get_content_as_filename::v3::Response {
content_disposition: Some(format!("inline: filename={}", body.filename)),
@ -175,6 +193,8 @@ pub async fn get_content_as_filename_route(
/// Load media thumbnail from our server or over federation.
///
/// - Only allows federation if `allow_remote` is true
/// - Only redirects if `allow_redirect` is true
/// - Uses client-provided `timeout_ms` if available, else defaults to 20 seconds
pub async fn get_content_thumbnail_route(
body: Ruma<get_content_thumbnail::v3::Request>,
) -> Result<get_content_thumbnail::v3::Response> {
@ -191,7 +211,7 @@ pub async fn get_content_thumbnail_route(
.map_err(|_| Error::BadRequest(ErrorKind::InvalidParam, "Width is invalid."))?,
body.height
.try_into()
.map_err(|_| Error::BadRequest(ErrorKind::InvalidParam, "Width is invalid."))?,
.map_err(|_| Error::BadRequest(ErrorKind::InvalidParam, "Height is invalid."))?,
)
.await?
{
@ -217,14 +237,14 @@ pub async fn get_content_thumbnail_route(
.send_federation_request(
&body.server_name,
get_content_thumbnail::v3::Request {
allow_remote: false,
allow_remote: body.allow_remote,
height: body.height,
width: body.width,
method: body.method.clone(),
server_name: body.server_name.clone(),
media_id: body.media_id.clone(),
timeout_ms: Duration::from_secs(20),
allow_redirect: false,
timeout_ms: body.timeout_ms,
allow_redirect: body.allow_redirect,
},
)
.await?;

View file

@ -130,7 +130,7 @@ pub async fn join_room_by_id_or_alias_route(
})
}
/// # `POST /_matrix/client/r0/rooms/{roomId}/leave`
/// # `POST /_matrix/client/v3/rooms/{roomId}/leave`
///
/// Tries to leave the sender user from a room.
///
@ -1242,7 +1242,7 @@ pub(crate) async fn invite_helper(
let state_lock = mutex_state.lock().await;
let content = to_raw_value(&RoomMemberEventContent {
avatar_url: None,
avatar_url: services().users.avatar_url(user_id)?,
displayname: None,
is_direct: Some(is_direct),
membership: MembershipState::Invite,

View file

@ -23,13 +23,14 @@ use ruma::{
},
int,
serde::JsonObject,
CanonicalJsonObject, OwnedRoomAliasId, RoomAliasId, RoomId, RoomVersionId,
CanonicalJsonObject, CanonicalJsonValue, OwnedRoomAliasId, OwnedRoomId, RoomAliasId, RoomId,
RoomVersionId,
};
use serde_json::{json, value::to_raw_value};
use std::{cmp::max, collections::BTreeMap, sync::Arc};
use tracing::{info, warn};
use tracing::{debug, error, info, warn};
/// # `POST /_matrix/client/r0/createRoom`
/// # `POST /_matrix/client/v3/createRoom`
///
/// Creates a new room.
///
@ -52,7 +53,73 @@ pub async fn create_room_route(
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
let room_id = RoomId::new(services().globals.server_name());
if !services().globals.allow_room_creation()
&& !&body.from_appservice
&& !services().users.is_admin(sender_user)?
{
return Err(Error::BadRequest(
ErrorKind::Forbidden,
"Room creation has been disabled.",
));
}
let room_id: OwnedRoomId;
// checks if the user specified an explicit (custom) room_id to be created with in request body.
// falls back to normal generated room ID if not specified.
if let Some(CanonicalJsonValue::Object(json_body)) = &body.json_body {
match json_body.get("room_id") {
Some(custom_room_id) => {
let custom_room_id_s = custom_room_id.to_string();
// do some checks on the custom room ID similar to room aliases
if custom_room_id_s.contains(':') {
return Err(Error::BadRequest(
ErrorKind::InvalidParam,
"Custom room ID contained `:` which is not allowed. Please note that this expects a localpart, not the full room ID.",
));
} else if custom_room_id_s.contains(char::is_whitespace) {
return Err(Error::BadRequest(
ErrorKind::InvalidParam,
"Custom room ID contained spaces which is not valid.",
));
} else if custom_room_id_s.len() > 255 {
return Err(Error::BadRequest(
ErrorKind::InvalidParam,
"Custom room ID is too long.",
));
}
let full_room_id = "!".to_owned()
+ &custom_room_id_s.replace('"', "")
+ ":"
+ services().globals.server_name().as_ref();
debug!("Full room ID: {}", full_room_id);
room_id = RoomId::parse(full_room_id).map_err(|e| {
info!(
"User attempted to create room with custom room ID but failed parsing: {}",
e
);
Error::BadRequest(
ErrorKind::InvalidParam,
"Custom room ID could not be parsed",
)
})?;
}
None => room_id = RoomId::new(services().globals.server_name()),
}
} else {
room_id = RoomId::new(services().globals.server_name())
}
// check if room ID doesn't already exist instead of erroring on auth check
if services().rooms.short.get_shortroomid(&room_id)?.is_some() {
return Err(Error::BadRequest(
ErrorKind::RoomInUse,
"Room with that custom room ID already exists",
));
}
services().rooms.short.get_or_create_shortroomid(&room_id)?;
@ -67,27 +134,47 @@ pub async fn create_room_route(
);
let state_lock = mutex_state.lock().await;
if !services().globals.allow_room_creation()
&& !body.from_appservice
&& !services().users.is_admin(sender_user)?
{
return Err(Error::BadRequest(
ErrorKind::Forbidden,
"Room creation has been disabled.",
));
}
let alias: Option<OwnedRoomAliasId> =
body.room_alias_name
.as_ref()
.map_or(Ok(None), |localpart| {
// TODO: Check for invalid characters and maximum length
// Basic checks on the room alias validity
if localpart.contains(':') {
return Err(Error::BadRequest(
ErrorKind::InvalidParam,
"Room alias contained `:` which is not allowed. Please note that this expects a localpart, not the full room alias.",
));
} else if localpart.contains(char::is_whitespace) {
return Err(Error::BadRequest(
ErrorKind::InvalidParam,
"Room alias contained spaces which is not a valid room alias.",
));
} else if localpart.len() > 255 {
// there is nothing spec-wise saying to check the limit of this,
// however absurdly long room aliases are guaranteed to be unreadable or done maliciously.
// there is no reason a room alias should even exceed 100 characters as is.
// generally in spec, 255 is matrix's fav number
return Err(Error::BadRequest(
ErrorKind::InvalidParam,
"Room alias is excessively long, clients may not be able to handle this. Please shorten it.",
));
} else if localpart.contains('"') {
return Err(Error::BadRequest(
ErrorKind::InvalidParam,
"Room alias contained `\"` which is not allowed.",
));
}
let alias = RoomAliasId::parse(format!(
"#{}:{}",
localpart,
services().globals.server_name()
))
.map_err(|_| Error::BadRequest(ErrorKind::InvalidParam, "Invalid alias."))?;
.map_err(|e| {
warn!("Failed to parse room alias for room ID {}: {e}", room_id);
Error::BadRequest(ErrorKind::InvalidParam, "Invalid room alias specified.")
})?;
if services()
.rooms
@ -126,7 +213,10 @@ pub async fn create_room_route(
Some(content) => {
let mut content = content
.deserialize_as::<CanonicalJsonObject>()
.expect("Invalid creation content");
.map_err(|e| {
error!("Failed to deserialise content as canonical JSON: {}", e);
Error::bad_database("Failed to deserialise content as canonical JSON.")
})?;
match room_version {
RoomVersionId::V1
| RoomVersionId::V2
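
The custom room ID checks and the room alias checks above are nearly identical, so the shared part could live in one helper. A self-contained sketch of that validation (not the committed code; note that the committed code strips `"` from custom room IDs but rejects it for aliases, while this sketch simply rejects it):

/// Validates a user-supplied localpart the way the checks above do:
/// no ':' (a localpart is expected, not a full ID), no whitespace, no '"',
/// and at most 255 characters.
fn validate_localpart(localpart: &str) -> Result<(), &'static str> {
    if localpart.contains(':') {
        Err("must be a localpart, not a full ID, so ':' is not allowed")
    } else if localpart.contains(char::is_whitespace) {
        Err("whitespace is not allowed")
    } else if localpart.contains('"') {
        Err("double quotes are not allowed")
    } else if localpart.len() > 255 {
        Err("longer than 255 characters")
    } else {
        Ok(())
    }
}

fn main() {
    assert!(validate_localpart("wonderfulroom").is_ok());
    assert!(validate_localpart("bad:room").is_err());
    assert!(validate_localpart("bad room").is_err());
}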

View file

@ -4,7 +4,14 @@ use argon2::{PasswordHash, PasswordVerifier};
use ruma::{
api::client::{
error::ErrorKind,
session::{get_login_types, login, logout, logout_all},
session::{
get_login_types,
login::{
self,
v3::{DiscoveryInfo, HomeserverInfo},
},
logout, logout_all,
},
uiaa::UserIdentifier,
},
UserId,
@ -18,7 +25,7 @@ struct Claims {
//exp: usize,
}
/// # `GET /_matrix/client/r0/login`
/// # `GET /_matrix/client/v3/login`
///
/// Get the supported login types of this server. One of these should be used as the `type` field
/// when logging in.
@ -31,7 +38,7 @@ pub async fn get_login_types_route(
]))
}
/// # `POST /_matrix/client/r0/login`
/// # `POST /_matrix/client/v3/login`
///
/// Authenticates the user and returns an access token it can use in subsequent requests.
///
@ -166,12 +173,38 @@ pub async fn login_route(body: Ruma<login::v3::Request>) -> Result<login::v3::Re
)?;
}
// send client well-known if specified so the client knows to reconfigure itself
let client_discovery_info = DiscoveryInfo::new(HomeserverInfo::new(
services()
.globals
.well_known_client()
.to_owned()
.unwrap_or("".to_owned()),
));
info!("{} logged in", user_id);
Ok(login::v3::Response::new(user_id, token, device_id))
// home_server is deprecated but apparently must still be sent despite it being deprecated over 6 years ago.
// initially i thought this macro was unnecessary, but ruma uses this same macro for the same reason so...
#[allow(deprecated)]
Ok(login::v3::Response {
user_id,
access_token: token,
device_id,
well_known: {
if client_discovery_info.homeserver.base_url.as_str() == "" {
None
} else {
Some(client_discovery_info)
}
},
expires_in: None,
home_server: Some(services().globals.server_name().to_owned()),
refresh_token: None,
})
}
/// # `POST /_matrix/client/r0/logout`
/// # `POST /_matrix/client/v3/logout`
///
/// Log out the current device.
///
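
On the login change above: since `well_known_client()` is an `Option` (hence the `unwrap_or`), the empty-string sentinel could be avoided by mapping over the `Option` directly. A hedged sketch reusing the names visible in the diff, assuming the config never stores an empty string:

// Sketch: build `well_known` straight from the Option instead of via "".
let well_known = services()
    .globals
    .well_known_client()
    .to_owned()
    .map(|url| DiscoveryInfo::new(HomeserverInfo::new(url)));

The resulting `Option<DiscoveryInfo>` can then be assigned to the response field as-is.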

View file

@ -1640,9 +1640,7 @@ pub async fn sync_events_v4_route(
ruma::JsOption::Some(heroes_avatar)
} else {
match services().rooms.state_accessor.get_avatar(room_id)? {
ruma::JsOption::Some(avatar) => {
js_option::JsOption::Some(avatar.url.unwrap())
}
ruma::JsOption::Some(avatar) => ruma::JsOption::Some(avatar.url.unwrap()),
ruma::JsOption::Null => ruma::JsOption::Null,
ruma::JsOption::Undefined => ruma::JsOption::Undefined,
}
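
One detail the change above preserves is `avatar.url.unwrap()`, which would panic if a room avatar event ever lacks a `url`. If that case can occur, a defensive variant of the arm might look like this (a sketch using the in-scope names; whether `Undefined` or `Null` is the right fallback is a judgment call):

ruma::JsOption::Some(avatar) => avatar
    .url
    .map_or(ruma::JsOption::Undefined, ruma::JsOption::Some),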

View file

@ -23,6 +23,12 @@ use tracing::{debug, error, warn};
use super::{Ruma, RumaResponse};
use crate::{services, Error, Result};
#[derive(Deserialize)]
struct QueryParams {
access_token: Option<String>,
user_id: Option<String>,
}
#[async_trait]
impl<T, S, B> FromRequest<S, B> for Ruma<T>
where
@ -34,12 +40,6 @@ where
type Rejection = Error;
async fn from_request(req: Request<B>, _state: &S) -> Result<Self, Self::Rejection> {
#[derive(Deserialize)]
struct QueryParams {
access_token: Option<String>,
user_id: Option<String>,
}
let (mut parts, mut body) = match req.with_limited_body() {
Ok(limited_req) => {
let (parts, body) = limited_req.into_parts();
@ -263,7 +263,44 @@ where
}
}
}
AuthScheme::None => (None, None, None, false),
AuthScheme::None => match parts.uri.path() {
// allow_public_room_directory_without_auth
"/_matrix/client/v3/publicRooms" | "/_matrix/client/r0/publicRooms" => {
if !services()
.globals
.config
.allow_public_room_directory_without_auth
{
let token = match token {
Some(token) => token,
_ => {
return Err(Error::BadRequest(
ErrorKind::MissingToken,
"Missing access token.",
))
}
};
match services().users.find_from_token(token).unwrap() {
None => {
return Err(Error::BadRequest(
ErrorKind::UnknownToken { soft_logout: false },
"Unknown access token.",
))
}
Some((user_id, device_id)) => (
Some(user_id),
Some(OwnedDeviceId::from(device_id)),
None,
false,
),
}
} else {
(None, None, None, false)
}
}
_ => (None, None, None, false),
},
}
};
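
The new `AuthScheme::None` arm above encodes a small decision for the `/publicRooms` endpoints: only demand and validate an access token when `allow_public_room_directory_without_auth` is disabled. A self-contained sketch of that decision (illustrative names, not conduwuit's API):

/// Illustrative only: mirrors the path match and config check in the arm above.
fn public_rooms_needs_token(path: &str, allow_without_auth: bool) -> bool {
    matches!(
        path,
        "/_matrix/client/v3/publicRooms" | "/_matrix/client/r0/publicRooms"
    ) && !allow_without_auth
}

fn main() {
    assert!(public_rooms_needs_token("/_matrix/client/v3/publicRooms", false));
    assert!(!public_rooms_needs_token("/_matrix/client/v3/publicRooms", true));
    assert!(!public_rooms_needs_token("/_matrix/client/v3/sync", false));
}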

View file

@ -11,6 +11,7 @@ use futures_util::future::TryFutureExt;
use get_profile_information::v1::ProfileField;
use http::header::{HeaderValue, AUTHORIZATION};
use ipaddress::IPAddress;
use ruma::{
api::{
client::error::{Error as RumaError, ErrorKind},
@ -114,7 +115,6 @@ impl FedDest {
}
}
#[tracing::instrument(skip(request))]
pub(crate) async fn send_request<T: OutgoingRequest>(
destination: &ServerName,
request: T,
@ -132,6 +132,36 @@ where
));
}
if destination.is_ip_literal() || IPAddress::is_valid(destination.host()) {
info!(
"Destination {} is an IP literal, checking against IP range denylist.",
destination
);
let ip = IPAddress::parse(destination.host()).map_err(|e| {
warn!("Failed to parse IP literal from string: {}", e);
Error::BadServerResponse("Invalid IP address")
})?;
let cidr_ranges_s = services().globals.ip_range_denylist().to_vec();
let mut cidr_ranges: Vec<IPAddress> = Vec::new();
for cidr in cidr_ranges_s {
cidr_ranges.push(IPAddress::parse(cidr).expect("we checked this at startup"));
}
debug!("List of pushed CIDR ranges: {:?}", cidr_ranges);
for cidr in cidr_ranges {
if cidr.includes(&ip) {
return Err(Error::BadServerResponse(
"Not allowed to send requests to this IP",
));
}
}
info!("IP literal {} is allowed.", destination);
}
debug!("Preparing to send request to {destination}");
let mut write_destination_to_cache = false;
@ -563,14 +593,25 @@ async fn request_well_known(destination: &str) -> Option<String> {
.await;
debug!("Got well known response");
debug!("Well known response: {:?}", response);
if let Err(e) = &response {
debug!("Well known error: {e:?}");
return None;
}
let text = response.ok()?.text().await;
debug!("Got well known response text");
debug!("Well known response text: {:?}", text);
if text.as_ref().ok()?.len() > 10000 {
info!("Well known response for destination '{destination}' exceeded past 10000 characters, assuming no well-known.");
return None;
}
let body: serde_json::Value = serde_json::from_str(&text.ok()?).ok()?;
debug!("serde_json body of well known text: {}", body);
Some(body.get("m.server")?.as_str()?.to_owned())
}
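
The IP range denylist check above is duplicated almost verbatim in the sending service near the end of this diff, so the core test could live in one helper. A sketch assuming only the `ipaddress` crate calls already used above (`IPAddress::parse`, `includes`):

use ipaddress::IPAddress;

/// Returns true if `host` parses as an IP literal that falls inside any of the
/// configured CIDR ranges. Sketch only; the committed code also logs and
/// returns a BadServerResponse error instead of a bool.
fn ip_is_denied(host: &str, denylist: &[String]) -> bool {
    let Ok(ip) = IPAddress::parse(host) else {
        return false; // not an IP literal (or unparsable): nothing to deny here
    };
    denylist
        .iter()
        .filter_map(|cidr| IPAddress::parse(cidr.as_str()).ok())
        .any(|range| range.includes(&ip))
}

Both call sites (the federation `send_request` here and `services/sending` below) could then share it.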

View file

@ -6,6 +6,7 @@ use std::{
};
use figment::Figment;
use ruma::{OwnedServerName, RoomVersionId};
use serde::{de::IgnoredAny, Deserialize};
use tracing::{error, warn};
@ -39,8 +40,6 @@ pub struct Config {
pub allow_check_for_updates: bool,
#[serde(default = "default_conduit_cache_capacity_modifier")]
pub conduit_cache_capacity_modifier: f64,
#[serde(default = "default_rocksdb_max_open_files")]
pub rocksdb_max_open_files: i32,
#[serde(default = "default_pdu_cache_capacity")]
pub pdu_cache_capacity: u32,
#[serde(default = "default_cleanup_second_interval")]
@ -96,7 +95,6 @@ pub struct Config {
#[serde(default = "default_turn_ttl")]
pub turn_ttl: u64,
pub rocksdb_log_path: Option<PathBuf>,
#[serde(default = "default_rocksdb_log_level")]
pub rocksdb_log_level: String,
#[serde(default = "default_rocksdb_max_log_file_size")]
@ -131,6 +129,9 @@ pub struct Config {
#[serde(default = "Vec::new")]
pub prevent_media_downloads_from: Vec<OwnedServerName>,
#[serde(default = "default_ip_range_denylist")]
pub ip_range_denylist: Vec<String>,
#[serde(flatten)]
pub catchall: BTreeMap<String, IgnoredAny>,
}
@ -191,11 +192,6 @@ impl fmt::Display for Config {
"Cache capacity modifier",
&self.conduit_cache_capacity_modifier.to_string(),
),
#[cfg(feature = "rocksdb")]
(
"Maximum open files for RocksDB",
&self.rocksdb_max_open_files.to_string(),
),
("PDU cache capacity", &self.pdu_cache_capacity.to_string()),
(
"Cleanup interval in seconds",
@ -315,6 +311,14 @@ impl fmt::Display for Config {
}
&lst.join(", ")
}),
("Outbound Request IP Range Denylist", {
let mut lst = vec![];
for item in self.ip_range_denylist.iter().cloned().enumerate() {
let (_, ip): (usize, String) = item;
lst.push(ip);
}
&lst.join(", ")
}),
];
let mut msg: String = "Active config values:\n\n".to_owned();
@ -355,10 +359,6 @@ fn default_conduit_cache_capacity_modifier() -> f64 {
1.0
}
fn default_rocksdb_max_open_files() -> i32 {
1000
}
fn default_pdu_cache_capacity() -> u32 {
150_000
}
@ -420,3 +420,27 @@ fn default_rocksdb_max_log_file_size() -> usize {
// 4 megabytes
4 * 1024 * 1024
}
fn default_ip_range_denylist() -> Vec<String> {
vec![
"127.0.0.0/8".to_owned(),
"10.0.0.0/8".to_owned(),
"172.16.0.0/12".to_owned(),
"192.168.0.0/16".to_owned(),
"100.64.0.0/10".to_owned(),
"192.0.0.0/24".to_owned(),
"169.254.0.0/16".to_owned(),
"192.88.99.0/24".to_owned(),
"198.18.0.0/15".to_owned(),
"192.0.2.0/24".to_owned(),
"198.51.100.0/24".to_owned(),
"203.0.113.0/24".to_owned(),
"224.0.0.0/4".to_owned(),
"::1/128".to_owned(),
"fe80::/10".to_owned(),
"fc00::/7".to_owned(),
"2001:db8::/32".to_owned(),
"ff00::/8".to_owned(),
"fec0::/10".to_owned(),
]
}

View file

@ -32,7 +32,6 @@ use crate::Result;
pub enum ProxyConfig {
#[default]
None,
Global {
#[serde(deserialize_with = "crate::utils::deserialize_from_str")]
url: Url,

View file

@ -69,7 +69,7 @@ fn db_options(rocksdb_cache: &rocksdb::Cache, config: &Config) -> rocksdb::Optio
db_opts.set_level_compaction_dynamic_level_bytes(true);
db_opts.create_if_missing(true);
db_opts.increase_parallelism(num_cpus::get() as i32);
db_opts.set_max_open_files(config.rocksdb_max_open_files);
//db_opts.set_max_open_files(config.rocksdb_max_open_files);
db_opts.set_compression_type(rocksdb::DBCompressionType::Zstd);
db_opts.set_compaction_style(rocksdb::DBCompactionStyle::Level);
db_opts.optimize_level_style_compaction(10 * 1024 * 1024);

View file

@ -32,10 +32,10 @@ use tokio::{net::UnixListener, signal, sync::oneshot};
use tower::ServiceBuilder;
use tower_http::{
cors::{self, CorsLayer},
trace::TraceLayer,
trace::{DefaultOnFailure, TraceLayer},
ServiceBuilderExt as _,
};
use tracing::{debug, error, info, warn};
use tracing::{debug, error, info, warn, Level};
use tracing_subscriber::{prelude::*, EnvFilter};
use tokio::sync::oneshot::Sender;
@ -147,11 +147,18 @@ async fn main() {
};
let config = &services().globals.config;
// check if user specified valid IP CIDR ranges on startup
for cidr in services().globals.ip_range_denylist() {
let _ = ipaddress::IPAddress::parse(cidr)
.map_err(|e| error!("Error parsing specified IP CIDR range: {e}"));
}
if config.allow_registration
&& !config.yes_i_am_very_very_sure_i_want_an_open_registration_server_prone_to_abuse
&& config.registration_token.is_none()
{
error!("!! You have `allow_registration` enabled in your config which means you are allowing ANYONE to register on your conduwuit instance without any 2nd-step (e.g. registration token).\n
If this is not the intended behaviour, please disable `allow_registration` and set a registration token.\n
error!("!! You have `allow_registration` enabled without a token configured in your config which means you are allowing ANYONE to register on your conduwuit instance without any 2nd-step (e.g. registration token).\n
If this is not the intended behaviour, please set a registration token with the `registration_token` config option.\n
For security and safety reasons, conduwuit will shut down. If you are extra sure this is the desired behaviour you want, please set the following config option to true:
`yes_i_am_very_very_sure_i_want_an_open_registration_server_prone_to_abuse`");
return;
@ -159,9 +166,14 @@ async fn main() {
if config.allow_registration
&& config.yes_i_am_very_very_sure_i_want_an_open_registration_server_prone_to_abuse
&& config.registration_token.is_none()
{
warn!("Open registration is enabled via setting `yes_i_am_very_very_sure_i_want_an_open_registration_server_prone_to_abuse` and `allow_registration` to true. You are expected to be aware of the risks now.\n
If this is not the desired behaviour, please disable `allow_registration` and set a registration token.");
warn!("Open registration is enabled via setting `yes_i_am_very_very_sure_i_want_an_open_registration_server_prone_to_abuse` and `allow_registration` to true without a registration token configured. You are expected to be aware of the risks now.\n
If this is not the desired behaviour, please set a registration token.");
}
if config.allow_outgoing_presence {
warn!("! Outgoing federated presence is not spec compliant due to relying on PDUs and EDUs combined.\nOutgoing presence will not be very reliable due to this and any issues with federated outgoing presence are very likely attributed to this issue.\nIncoming presence and local presence are unaffected.");
}
info!("Starting server");
@ -186,15 +198,17 @@ async fn run_server() -> io::Result<()> {
.sensitive_headers([header::AUTHORIZATION])
.layer(axum::middleware::from_fn(spawn_task))
.layer(
TraceLayer::new_for_http().make_span_with(|request: &http::Request<_>| {
let path = if let Some(path) = request.extensions().get::<MatchedPath>() {
path.as_str()
} else {
request.uri().path()
};
TraceLayer::new_for_http()
.make_span_with(|request: &http::Request<_>| {
let path = if let Some(path) = request.extensions().get::<MatchedPath>() {
path.as_str()
} else {
request.uri().path()
};
tracing::info_span!("http_request", %path)
}),
tracing::info_span!("http_request", %path)
})
.on_failure(DefaultOnFailure::new().level(Level::INFO)),
)
.layer(axum::middleware::from_fn(unrecognized_method))
.layer(

View file

@ -111,15 +111,13 @@ impl Default for RotationHandler {
}
}
type DnsOverrides = Box<dyn Fn(&str) -> Option<SocketAddr> + Send + Sync>;
struct Resolver {
inner: GaiResolver,
overrides: DnsOverrides,
overrides: Arc<RwLock<TlsNameMap>>,
}
impl Resolver {
fn new(overrides: DnsOverrides) -> Resolver {
fn new(overrides: Arc<RwLock<TlsNameMap>>) -> Self {
Resolver {
inner: GaiResolver::new(),
overrides,
@ -129,16 +127,26 @@ impl Resolver {
impl Resolve for Resolver {
fn resolve(&self, name: Name) -> Resolving {
if let Some(addr) = (self.overrides)(name.as_str()) {
let once: Box<dyn Iterator<Item = SocketAddr> + Send> = Box::new(iter::once(addr));
return Box::pin(future::ready(Ok(once)));
}
let this = &mut self.inner.clone();
Box::pin(HyperService::<Name>::call(this, name).map(|result| {
result
.map(|addrs| -> Addrs { Box::new(addrs) })
.map_err(|err| -> Box<dyn StdError + Send + Sync> { Box::new(err) })
}))
self.overrides
.read()
.expect("lock should not be poisoned")
.get(name.as_str())
.and_then(|(override_name, port)| {
override_name.first().map(|first_name| {
let x: Box<dyn Iterator<Item = SocketAddr> + Send> =
Box::new(iter::once(SocketAddr::new(*first_name, *port)));
let x: Resolving = Box::pin(future::ready(Ok(x)));
x
})
})
.unwrap_or_else(|| {
let this = &mut self.inner.clone();
Box::pin(HyperService::<Name>::call(this, name).map(|result| {
result
.map(|addrs| -> Addrs { Box::new(addrs) })
.map_err(|err| -> Box<dyn StdError + Send + Sync> { Box::new(err) })
}))
})
}
}
@ -163,14 +171,8 @@ impl Service<'_> {
.map(|secret| jsonwebtoken::DecodingKey::from_secret(secret.as_bytes()));
let default_client = reqwest_client_builder(&config)?.build()?;
let name_override = Arc::clone(&tls_name_override);
let federation_client = reqwest_client_builder(&config)?
.dns_resolver(Arc::new(Resolver::new(Box::new(move |domain| {
let read_guard = name_override.read().unwrap();
let (override_name, port) = read_guard.get(domain)?;
let first_name = override_name.first()?;
Some(SocketAddr::new(*first_name, *port))
}))))
.dns_resolver(Arc::new(Resolver::new(tls_name_override.clone())))
.build()?;
// Supported and stable room versions
@ -395,7 +397,7 @@ impl Service<'_> {
self.config.allow_incoming_presence
}
pub fn allow_outcoming_presence(&self) -> bool {
pub fn allow_outgoing_presence(&self) -> bool {
self.config.allow_outgoing_presence
}
@ -427,6 +429,10 @@ impl Service<'_> {
&self.config.prevent_media_downloads_from
}
pub fn ip_range_denylist(&self) -> &[String] {
&self.config.ip_range_denylist
}
pub fn supported_room_versions(&self) -> Vec<RoomVersionId> {
let mut room_versions: Vec<RoomVersionId> = vec![];
room_versions.extend(self.stable_room_versions.clone());
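
On the resolver refactor earlier in this file section: `TlsNameMap` itself is not shown in the diff, but from `override_name.first()` and `*port` it behaves like a name → (addresses, port) map. A self-contained sketch of that assumed shape and of the lookup the new `Resolver::resolve` performs before falling back to `GaiResolver` (the concrete map type here is an assumption):

use std::collections::HashMap;
use std::net::{IpAddr, Ipv4Addr, SocketAddr};
use std::sync::{Arc, RwLock};

// Assumption: TlsNameMap maps a server name to its resolved addresses plus port,
// matching how `override_name.first()` and `*port` are used in the diff.
type TlsNameMap = HashMap<String, (Vec<IpAddr>, u16)>;

fn main() {
    let overrides: Arc<RwLock<TlsNameMap>> = Arc::new(RwLock::new(HashMap::new()));
    overrides.write().unwrap().insert(
        "matrix.example.com".to_owned(),
        (vec![IpAddr::V4(Ipv4Addr::new(203, 0, 113, 10))], 8448),
    );

    // This mirrors the lookup the new Resolver::resolve performs before
    // handing the name to GaiResolver.
    let guard = overrides.read().unwrap();
    if let Some((addrs, port)) = guard.get("matrix.example.com") {
        if let Some(first) = addrs.first() {
            let addr = SocketAddr::new(*first, *port);
            println!("would resolve to {addr}");
        }
    }
}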

View file

@ -200,7 +200,13 @@ impl Service {
let ctx = PushConditionRoomCtx {
room_id: room_id.to_owned(),
member_count: 10_u32.into(), // TODO: get member count efficiently
member_count: UInt::from(
services()
.rooms
.state_cache
.room_joined_count(room_id)?
.unwrap_or(1) as u32,
),
user_id: user.to_owned(),
user_display_name: services()
.users

View file

@ -5,7 +5,6 @@ use std::{
};
pub use data::Data;
use js_option::JsOption;
use lru_cache::LruCache;
use ruma::{
events::{
@ -291,12 +290,12 @@ impl Service {
})
}
pub fn get_avatar(&self, room_id: &RoomId) -> Result<JsOption<RoomAvatarEventContent>> {
pub fn get_avatar(&self, room_id: &RoomId) -> Result<ruma::JsOption<RoomAvatarEventContent>> {
services()
.rooms
.state_accessor
.room_state_get(room_id, &StateEventType::RoomAvatar, "")?
.map_or(Ok(JsOption::Undefined), |s| {
.map_or(Ok(ruma::JsOption::Undefined), |s| {
serde_json::from_str(s.content.get())
.map_err(|_| Error::bad_database("Invalid room avatar event in database."))
})

View file

@ -1,6 +1,7 @@
mod data;
pub use data::Data;
use ipaddress::IPAddress;
use std::{
collections::{BTreeMap, HashMap, HashSet},
@ -43,7 +44,7 @@ use tokio::{
select,
sync::{mpsc, Mutex, Semaphore},
};
use tracing::{debug, error, warn};
use tracing::{debug, error, info, warn};
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub enum OutgoingKind {
@ -287,7 +288,7 @@ impl Service {
.filter(|user_id| user_id.server_name() == services().globals.server_name()),
);
if services().globals.allow_outcoming_presence() {
if services().globals.allow_outgoing_presence() {
// Look for presence updates in this room
let mut presence_updates = Vec::new();
@ -716,6 +717,36 @@ impl Service {
where
T: Debug,
{
if destination.is_ip_literal() || IPAddress::is_valid(destination.host()) {
info!(
"Destination {} is an IP literal, checking against IP range denylist.",
destination
);
let ip = IPAddress::parse(destination.host()).map_err(|e| {
warn!("Failed to parse IP literal from string: {}", e);
Error::BadServerResponse("Invalid IP address")
})?;
let cidr_ranges_s = services().globals.ip_range_denylist().to_vec();
let mut cidr_ranges: Vec<IPAddress> = Vec::new();
for cidr in cidr_ranges_s {
cidr_ranges.push(IPAddress::parse(cidr).expect("we checked this at startup"));
}
debug!("List of pushed CIDR ranges: {:?}", cidr_ranges);
for cidr in cidr_ranges {
if cidr.includes(&ip) {
return Err(Error::BadServerResponse(
"Not allowed to send requests to this IP",
));
}
}
info!("IP literal {} is allowed.", destination);
}
debug!("Waiting for permit");
let permit = self.maximum_requests.acquire().await;
debug!("Got permit");