Compare commits

..

88 commits

Author SHA1 Message Date
June Clementine Strawberry
d8311a5ff6
bump crossbeam-channel bc yanked crate with potential double free
Signed-off-by: June Clementine Strawberry <june@3.dog>
2025-04-08 23:38:54 -04:00
June Clementine Strawberry
47f8345457
bump tokio because of RUSTSEC-2025-0023
Signed-off-by: June Clementine Strawberry <june@3.dog>
2025-04-08 09:05:49 -04:00
June Clementine Strawberry
99868b1661
update new complement flakes
Signed-off-by: June Clementine Strawberry <june@3.dog>
2025-04-06 16:11:35 -04:00
June Clementine Strawberry
d5ad973464
change forbidden_server_names and etc to allow regex patterns for wildcards
Signed-off-by: June Clementine Strawberry <june@3.dog>
2025-04-06 15:25:19 -04:00
June Clementine Strawberry
ff276a42a3
drop unnecessary info log to debug
Signed-off-by: June Clementine Strawberry <june@3.dog>
2025-04-06 13:19:09 -04:00
June Clementine Strawberry
5f8c68ab84
add trace logging for room summaries, use server_in_room instead of exists
Signed-off-by: June Clementine Strawberry <june@3.dog>
2025-04-06 13:17:13 -04:00
June Clementine Strawberry
6578b83bce
parallelise IO of user searching, improve perf, raise max limit to 500
Signed-off-by: June Clementine Strawberry <june@3.dog>
2025-04-05 20:09:22 -04:00
June Clementine Strawberry
3cc92b32ec
bump rust toolchain to 1.86.0
Signed-off-by: June Clementine Strawberry <june@3.dog>
2025-04-05 18:37:13 -04:00
June Clementine Strawberry
9678948daf
use patch of resolv-conf crate to allow no-aaaa resolv.conf option
Signed-off-by: June Clementine Strawberry <june@3.dog>
2025-04-05 18:33:43 -04:00
Jason Volk
500faa8d7f simplify space join rules related
Signed-off-by: Jason Volk <jason@zemos.net>
2025-04-05 22:12:33 +00:00
Jason Volk
d6cc447add simplify acl brick-check conditions
Signed-off-by: Jason Volk <jason@zemos.net>
2025-04-05 22:12:33 +00:00
June Clementine Strawberry
e28ae8fb4d
downgrade deranged crate
Signed-off-by: June Clementine Strawberry <june@3.dog>
2025-04-05 14:26:00 -04:00
June Clementine Strawberry
c7246662f4
try partially reverting 94b107b42b
Signed-off-by: June Clementine Strawberry <june@3.dog>
2025-04-05 14:07:37 -04:00
June Clementine Strawberry
a212bf7cfc
update default room version to v11
Signed-off-by: June Clementine Strawberry <june@3.dog>
2025-04-05 14:00:40 -04:00
Jason Volk
58b8c7516a extend extract_variant to multiple variants
Signed-off-by: Jason Volk <jason@zemos.net>
2025-04-05 02:44:46 +00:00
Jason Volk
bb8320a691 abstract and encapsulate the awkward OptionFuture into Stream pattern
Signed-off-by: Jason Volk <jason@zemos.net>
2025-04-05 02:44:46 +00:00
Jason Volk
532dfd004d move core::pdu and core::state_res into core::matrix::
Signed-off-by: Jason Volk <jason@zemos.net>
2025-04-05 02:44:46 +00:00
June Clementine Strawberry
4e5b87d0cd
add missing condition for signatures upload failures
Signed-off-by: June Clementine Strawberry <june@3.dog>
2025-04-04 11:34:31 -04:00
Jason Volk
00f7745ec4 remove the db pool queue full warning
Signed-off-by: Jason Volk <jason@zemos.net>
2025-04-04 02:59:54 +00:00
Jason Volk
d036394ec7 refactor incoming prev events loop; mitigate large future
Signed-off-by: Jason Volk <jason@zemos.net>
2025-04-03 22:40:40 +00:00
Jason Volk
6a073b4fa4 remove additional unnecessary Arc
Signed-off-by: Jason Volk <jason@zemos.net>
2025-04-03 22:40:40 +00:00
Jason Volk
b7109131e2 further simplify get_missing_events; various log calls
Signed-off-by: Jason Volk <jason@zemos.net>
2025-04-03 22:40:40 +00:00
June Clementine Strawberry
94b107b42b add some debug logging and misc cleanup to keys/signatures/upload
Signed-off-by: June Clementine Strawberry <june@3.dog>
2025-04-03 16:08:18 -04:00
Jason Volk
29d55b8036 move systemd stopping notification point
Signed-off-by: Jason Volk <jason@zemos.net>
2025-04-03 19:38:51 +00:00
Jason Volk
45fd3875c8 move runtime shutdown out of main; gather final stats
Signed-off-by: Jason Volk <jason@zemos.net>
2025-04-03 19:38:51 +00:00
Jason Volk
f9529937ce patch hyper-util due to conflicts with federation resolver hooks
Signed-off-by: Jason Volk <jason@zemos.net>
2025-04-03 19:38:51 +00:00
Jason Volk
0b56204f89 bump additional dependencies
Signed-off-by: Jason Volk <jason@zemos.net>
2025-04-03 19:38:51 +00:00
Jason Volk
58adb6fead upgrade hickory and hyper-util dependencies
Signed-off-by: Jason Volk <jason@zemos.net>
2025-04-03 19:38:51 +00:00
Jason Volk
5d1404e9df fix well-known using the hooked resolver
Signed-off-by: Jason Volk <jason@zemos.net>
2025-04-03 19:38:51 +00:00
June Clementine Strawberry
f14756fb76 leave room locally if room is banned, rescind knocks on deactivation too
Signed-off-by: June Clementine Strawberry <june@3.dog>
2025-04-03 12:21:16 -04:00
June Clementine Strawberry
24be579477 add appservice MSC4190 support
Signed-off-by: June Clementine Strawberry <june@3.dog>
2025-04-03 12:21:16 -04:00
June Clementine Strawberry
0e0b8cc403
fixup+update msc3266, add fed support, parallelise IO
Signed-off-by: June Clementine Strawberry <june@3.dog>
2025-04-03 00:56:37 -04:00
June Clementine Strawberry
1036f8dfa8
default shared history vis on unknown visibilities, drop needless error log
Signed-off-by: June Clementine Strawberry <june@3.dog>
2025-04-02 22:46:01 -04:00
June Clementine Strawberry
74012c5289
significantly improve get_missing_events fed code
Signed-off-by: June Clementine Strawberry <june@3.dog>
2025-04-02 22:44:44 -04:00
June Clementine Strawberry
ea246d91d9
remove pointless and buggy *_visibility in-memory caches
Signed-off-by: June Clementine Strawberry <june@3.dog>
2025-04-02 22:38:47 -04:00
June Clementine Strawberry
1b71b99c51
fix weird issue with acl c2s check
Signed-off-by: June Clementine Strawberry <june@3.dog>
2025-04-02 10:49:38 -04:00
Jason Volk
0f81c1e1cc revert hyper-util upgrade due to continued DNS issues
Signed-off-by: Jason Volk <jason@zemos.net>
2025-03-31 22:17:08 -04:00
Jason Volk
bee1f89624 bump dependencies
Signed-off-by: Jason Volk <jason@zemos.net>
2025-03-31 05:03:52 +00:00
Jason Volk
5768ca8442 upgrade dependency ByteSize
Signed-off-by: Jason Volk <jason@zemos.net>
2025-03-31 04:27:20 +00:00
Jason Volk
3f0f89cddb use async_trait without axum re-export
Signed-off-by: Jason Volk <jason@zemos.net>
2025-03-31 04:27:20 +00:00
Jason Volk
d3b65af616 remove several services.globals config wrappers
Signed-off-by: Jason Volk <jason@zemos.net>
2025-03-31 03:00:53 +00:00
Jason Volk
d60920c728 workaround some large type name length issues
Signed-off-by: Jason Volk <jason@zemos.net>
2025-03-31 03:00:53 +00:00
Jason Volk
db99d3a001 remove recently-made-unnecessary unsafe block
Signed-off-by: Jason Volk <jason@zemos.net>
2025-03-31 02:30:32 +00:00
Jason Volk
bee4c6255a reorg PduEvent strip tools and callsites
Signed-off-by: Jason Volk <jason@zemos.net>
2025-03-30 23:00:37 +00:00
Jason Volk
dc6e9e74d9 add spans for for jemalloc mallctl points
Signed-off-by: Jason Volk <jason@zemos.net>
2025-03-30 23:00:37 +00:00
Jason Volk
5bf5afaec8 instrument tokio before/after poll hooks
Signed-off-by: Jason Volk <jason@zemos.net>
2025-03-30 23:00:37 +00:00
Jason Volk
095734a8e7 bump tokio to 1.44.1
Signed-off-by: Jason Volk <jason@zemos.net>
2025-03-30 23:00:37 +00:00
Jason Volk
a93cb34dd6 disambiguate UInt/u64 type related in client/api/directory; use err macros.
Signed-off-by: Jason Volk <jason@zemos.net>
2025-03-30 23:00:37 +00:00
Jason Volk
b03c493bf9 add stub for database benches
Signed-off-by: Jason Volk <jason@zemos.net>
2025-03-29 01:06:39 +00:00
Jason Volk
d0132706cd add --read-only and --maintenance program option
Signed-off-by: Jason Volk <jason@zemos.net>
2025-03-29 01:06:39 +00:00
Jason Volk
0e2009dbf5 fix client hierarchy loop condition
Signed-off-by: Jason Volk <jason@zemos.net>
2025-03-28 22:47:51 +00:00
Ginger
3e57b7d35d Update expected test results 2025-03-28 14:30:14 -04:00
Ginger
75b6daa67f Fix off-by-one error when fetching room hierarchy 2025-03-28 14:30:14 -04:00
June Clementine Strawberry
6365f1a887 remove sccache from ci for now
Signed-off-by: June Clementine Strawberry <june@3.dog>
2025-03-28 14:26:12 -04:00
Jason Volk
b2bf35cfab fix benches from state-res
Signed-off-by: Jason Volk <jason@zemos.net>
2025-03-28 09:01:46 +00:00
Jason Volk
7f448d88a4 use qualified crate names from within workspace
Signed-off-by: Jason Volk <jason@zemos.net>
2025-03-27 07:08:41 +00:00
Jason Volk
c99f5770a0 mark get_summary_and_children_federation Send
Signed-off-by: Jason Volk <jason@zemos.net>
2025-03-27 07:08:41 +00:00
Jason Volk
dfe058a244 default config item to 'none' when zstd_compression not featured
Signed-off-by: Jason Volk <jason@zemos.net>
2025-03-27 01:09:27 +00:00
Jason Volk
07ba00f74e abstract raw query command iterations
Signed-off-by: Jason Volk <jason@zemos.net>
2025-03-26 04:43:05 +00:00
Jason Volk
9d0ce3965e fix lints
Signed-off-by: Jason Volk <jason@zemos.net>
2025-03-26 02:25:54 +00:00
Jason Volk
d1b82ea225 use #[ignore] for todo'ed tests
Signed-off-by: Jason Volk <jason@zemos.net>
2025-03-26 01:33:41 +00:00
Jason Volk
23e3f6526f split well_known resolver into unit
Signed-off-by: Jason Volk <jason@zemos.net>
2025-03-26 01:33:41 +00:00
Jason Volk
8010505853 implement clear_cache() for resolver service
Signed-off-by: Jason Volk <jason@zemos.net>
2025-03-26 01:33:41 +00:00
Jason Volk
9ce95a7030 make service memory_usage()/clear_cache() async trait
Signed-off-by: Jason Volk <jason@zemos.net>
2025-03-26 01:33:41 +00:00
Jason Volk
d8ea8b378c add Map::clear() to db interface
Signed-off-by: Jason Volk <jason@zemos.net>
2025-03-26 01:33:41 +00:00
Jason Volk
17003ba773 add FIFO compaction for persistent-cache descriptor; comments/cleanup
Signed-off-by: Jason Volk <jason@zemos.net>
2025-03-26 01:33:41 +00:00
Jason Volk
a57336ec13 assume canonical order in db serialization test
Signed-off-by: Jason Volk <jason@zemos.net>
2025-03-26 01:33:41 +00:00
Jason Volk
7294368015 parallelize IO for PublicRoomsChunk vector
Signed-off-by: Jason Volk <jason@zemos.net>
2025-03-26 01:33:41 +00:00
Jason Volk
aa4d2e2363 fix unused import without feature jemalloc_conf
fix span passed by value

Signed-off-by: Jason Volk <jason@zemos.net>
2025-03-26 01:33:41 +00:00
Jason Volk
07ec9d6d85 re-sort pushkey_deviceid (33c5afe050)
Signed-off-by: Jason Volk <jason@zemos.net>
2025-03-26 01:33:41 +00:00
cy
33c5afe050
delete pushers created with different access token on password change 2025-03-21 10:34:17 -04:00
June Clementine Strawberry
7bf92c8a37
replace unnecessary check when updating device keys
Signed-off-by: June Clementine Strawberry <june@3.dog>
2025-03-17 23:02:34 -04:00
cy
658c19d55e check if we already have a more preferable key backup before adding 2025-03-16 18:23:19 -04:00
cy
4518f55408 guard against using someone else's access token in UIAA 2025-03-15 19:35:09 -04:00
June Clementine Strawberry
ee3c585555
skip a few flakey complement tests
Signed-off-by: June Clementine Strawberry <june@3.dog>
2025-03-15 19:14:45 -04:00
June Clementine Strawberry
6c29792b3d
respect include_leave syncv3 filter
Signed-off-by: June Clementine Strawberry <june@3.dog>
2025-03-13 15:49:40 -04:00
June Clementine Strawberry
258b399de9 bump ruwuma
Signed-off-by: June Clementine Strawberry <june@3.dog>
2025-03-13 15:23:10 -04:00
June Clementine Strawberry
5dea52f0f8
stop doing complement cert gen and just use self-signed cert
Signed-off-by: June Clementine Strawberry <june@3.dog>
2025-03-13 10:50:43 -04:00
June Clementine Strawberry
1d1ccec532 fix some nightly clippy lints
Signed-off-by: June Clementine Strawberry <june@3.dog>
2025-03-13 08:37:34 -04:00
June Clementine Strawberry
0877f29439 respect membership filters on /members
Signed-off-by: June Clementine Strawberry <june@3.dog>
2025-03-13 08:37:34 -04:00
June Clementine Strawberry
e920c44cb4
ignore humantime dep as tracing console-subscriber uses it (somewhere)
Signed-off-by: June Clementine Strawberry <june@3.dog>
2025-03-11 21:15:11 -04:00
June Clementine Strawberry
ae818d5b25 remove most of cargo test from engage as crane does that but with more caching
Signed-off-by: June Clementine Strawberry <june@3.dog>
2025-03-11 21:09:24 -04:00
June Clementine Strawberry
7f95eef9ab
bump ruwuma
Signed-off-by: June Clementine Strawberry <june@3.dog>
2025-03-11 21:09:19 -04:00
June Clementine Strawberry
3104586884
bump tracing-subscriber, allowlist cargo-doc lint in admin room
Signed-off-by: June Clementine Strawberry <june@3.dog>
2025-03-11 18:05:36 -04:00
Odd Eivind Ebbesen
c4b05e77f3
Fix up wording in the doc comments for admin media deletion (#694) 2025-03-10 17:28:29 -04:00
Ginger
1366a3092f
Check the room_types filter when searching for local public rooms (#698) 2025-03-10 17:28:19 -04:00
Tamara Schmitz
1e23c95ec6
docs: refactor reverse proxy setup sections (#701) 2025-03-10 17:27:53 -04:00
June Clementine Strawberry
56dba8acb7
misc docs updates
Signed-off-by: June Clementine Strawberry <june@3.dog>
2025-03-10 17:27:06 -04:00
196 changed files with 3835 additions and 2978 deletions


@@ -1,5 +1,5 @@
 [advisories]
-ignore = ["RUSTSEC-2024-0436"] # advisory IDs to ignore e.g. ["RUSTSEC-2019-0001", ...]
+ignore = ["RUSTSEC-2024-0436", "RUSTSEC-2025-0014"] # advisory IDs to ignore e.g. ["RUSTSEC-2019-0001", ...]
 informational_warnings = [] # warn for categories of informational advisories
 severity_threshold = "none" # CVSS severity ("none", "low", "medium", "high", "critical")


@@ -21,16 +21,6 @@ concurrency:
 cancel-in-progress: true

 env:
-# sccache only on main repo
-SCCACHE_GHA_ENABLED: "${{ !startsWith(github.ref, 'refs/tags/') && (github.event.pull_request.draft != true) && (vars.DOCKER_USERNAME != '') && (vars.GITLAB_USERNAME != '') && (vars.SCCACHE_ENDPOINT != '') && (github.event.pull_request.user.login != 'renovate[bot]') && 'true' || 'false' }}"
-RUSTC_WRAPPER: "${{ !startsWith(github.ref, 'refs/tags/') && (github.event.pull_request.draft != true) && (vars.DOCKER_USERNAME != '') && (vars.GITLAB_USERNAME != '') && (vars.SCCACHE_ENDPOINT != '') && (github.event.pull_request.user.login != 'renovate[bot]') && 'sccache' || '' }}"
-SCCACHE_BUCKET: "${{ (github.event.pull_request.draft != true) && (vars.DOCKER_USERNAME != '') && (vars.GITLAB_USERNAME != '') && (vars.SCCACHE_ENDPOINT != '') && (github.event.pull_request.user.login != 'renovate[bot]') && 'sccache' || '' }}"
-SCCACHE_S3_USE_SSL: ${{ vars.SCCACHE_S3_USE_SSL }}
-SCCACHE_REGION: ${{ vars.SCCACHE_REGION }}
-SCCACHE_ENDPOINT: ${{ vars.SCCACHE_ENDPOINT }}
-SCCACHE_CACHE_MULTIARCH: ${{ vars.SCCACHE_CACHE_MULTIARCH }}
-AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
-AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
 # Required to make some things output color
 TERM: ansi
 # Publishing to my nix binary cache
@@ -123,13 +113,6 @@ jobs:
 bin/nix-build-and-cache just '.#devShells.x86_64-linux.all-features'
 bin/nix-build-and-cache just '.#devShells.x86_64-linux.dynamic'

-# use sccache for Rust
-- name: Run sccache-cache
-  # we want a fresh-state when we do releases/tags to avoid potential cache poisoning attacks impacting
-  # releases and tags
-  #if: ${{ (env.SCCACHE_GHA_ENABLED == 'true') && !startsWith(github.ref, 'refs/tags/') }}
-  uses: mozilla-actions/sccache-action@main
-
 # use rust-cache
 - uses: Swatinem/rust-cache@v2
 # we want a fresh-state when we do releases/tags to avoid potential cache poisoning attacks impacting
@@ -247,13 +230,6 @@ jobs:
 direnv allow
 nix develop .#all-features --command true --impure

-# use sccache for Rust
-- name: Run sccache-cache
-  # we want a fresh-state when we do releases/tags to avoid potential cache poisoning attacks impacting
-  # releases and tags
-  #if: ${{ (env.SCCACHE_GHA_ENABLED == 'true') && !startsWith(github.ref, 'refs/tags/') }}
-  uses: mozilla-actions/sccache-action@main
-
 # use rust-cache
 - uses: Swatinem/rust-cache@v2
 # we want a fresh-state when we do releases/tags to avoid potential cache poisoning attacks impacting

Cargo.lock (generated, 1018 changed lines): file diff suppressed because it is too large.


@@ -20,18 +20,18 @@ license = "Apache-2.0"
 # See also `rust-toolchain.toml`
 readme = "README.md"
 repository = "https://github.com/girlbossceo/conduwuit"
-rust-version = "1.85.0"
+rust-version = "1.86.0"
 version = "0.5.0"

 [workspace.metadata.crane]
 name = "conduwuit"

 [workspace.dependencies.arrayvec]
-version = "0.7.4"
+version = "0.7.6"
 features = ["serde"]

 [workspace.dependencies.smallvec]
-version = "1.13.2"
+version = "1.14.0"
 features = [
 	"const_generics",
 	"const_new",
@@ -45,7 +45,7 @@ version = "0.3"
 features = ["ffi", "std", "union"]

 [workspace.dependencies.const-str]
-version = "0.5.7"
+version = "0.6.2"

 [workspace.dependencies.ctor]
 version = "0.2.9"
@@ -81,13 +81,13 @@ version = "0.8.5"
 # Used for the http request / response body type for Ruma endpoints used with reqwest
 [workspace.dependencies.bytes]
-version = "1.9.0"
+version = "1.10.1"

 [workspace.dependencies.http-body-util]
-version = "0.1.2"
+version = "0.1.3"

 [workspace.dependencies.http]
-version = "1.2.0"
+version = "1.3.1"

 [workspace.dependencies.regex]
 version = "1.11.1"
@@ -111,7 +111,7 @@ default-features = false
 features = ["typed-header", "tracing"]

 [workspace.dependencies.axum-server]
-version = "0.7.1"
+version = "0.7.2"
 default-features = false

 # to listen on both HTTP and HTTPS if listening on TLS dierctly from conduwuit for complement or sytest
@@ -122,7 +122,7 @@ version = "0.7"
 version = "0.6.1"

 [workspace.dependencies.tower]
-version = "0.5.1"
+version = "0.5.2"
 default-features = false
 features = ["util"]
@@ -141,12 +141,12 @@ features = [
 ]

 [workspace.dependencies.rustls]
-version = "0.23.19"
+version = "0.23.25"
 default-features = false
 features = ["aws_lc_rs"]

 [workspace.dependencies.reqwest]
-version = "0.12.9"
+version = "0.12.15"
 default-features = false
 features = [
 	"rustls-tls-native-roots",
@@ -156,12 +156,12 @@ features = [
 ]

 [workspace.dependencies.serde]
-version = "1.0.216"
+version = "1.0.219"
 default-features = false
 features = ["rc"]

 [workspace.dependencies.serde_json]
-version = "1.0.133"
+version = "1.0.140"
 default-features = false
 features = ["raw_value"]
@@ -204,13 +204,13 @@ features = [
 # logging
 [workspace.dependencies.log]
-version = "0.4.22"
+version = "0.4.27"
 default-features = false
 [workspace.dependencies.tracing]
 version = "0.1.41"
 default-features = false
 [workspace.dependencies.tracing-subscriber]
-version = "=0.3.18"
+version = "0.3.19"
 default-features = false
 features = ["env-filter", "std", "tracing", "tracing-log", "ansi", "fmt"]

 [workspace.dependencies.tracing-core]
@@ -224,7 +224,7 @@ default-features = false
 # used for conduwuit's CLI and admin room command parsing
 [workspace.dependencies.clap]
-version = "4.5.23"
+version = "4.5.35"
 default-features = false
 features = [
 	"derive",
@@ -237,12 +237,12 @@ features = [
 ]

 [workspace.dependencies.futures]
-version = "0.3.30"
+version = "0.3.31"
 default-features = false
 features = ["std", "async-await"]

 [workspace.dependencies.tokio]
-version = "1.42.0"
+version = "1.44.2"
 default-features = false
 features = [
 	"fs",
@@ -275,7 +275,7 @@ features = ["alloc", "std"]
 default-features = false

 [workspace.dependencies.hyper]
-version = "1.5.1"
+version = "1.6.0"
 default-features = false
 features = [
 	"server",
@@ -284,8 +284,7 @@ features = [
 ]

 [workspace.dependencies.hyper-util]
-# hyper-util >=0.1.9 seems to have DNS issues
-version = "=0.1.8"
+version = "0.1.11"
 default-features = false
 features = [
 	"server-auto",
@@ -295,7 +294,7 @@ features = [
 # to support multiple variations of setting a config option
 [workspace.dependencies.either]
-version = "1.13.0"
+version = "1.15.0"
 default-features = false
 features = ["serde"]
@@ -306,17 +305,22 @@ default-features = false
 features = ["env", "toml"]

 [workspace.dependencies.hickory-resolver]
-version = "0.24.2"
+version = "0.25.1"
 default-features = false
+features = [
+	"serde",
+	"system-config",
+	"tokio",
+]

 # Used for conduwuit::Error type
 [workspace.dependencies.thiserror]
-version = "2.0.7"
+version = "2.0.12"
 default-features = false

 # Used when hashing the state
 [workspace.dependencies.ring]
-version = "0.17.8"
+version = "0.17.14"
 default-features = false
@@ -337,7 +341,7 @@ version = "0.4.0"
 version = "2.3.1"

 [workspace.dependencies.async-trait]
-version = "0.1.83"
+version = "0.1.88"

 [workspace.dependencies.lru-cache]
 version = "0.1.2"
@@ -346,7 +350,7 @@ version = "0.1.2"
 [workspace.dependencies.ruma]
 git = "https://github.com/girlbossceo/ruwuma"
 #branch = "conduwuit-changes"
-rev = "f5ab6302aaa55a14827a9cb5b40e980dd135fe14"
+rev = "920148dca1076454ca0ca5d43b5ce1aa708381d4"
 features = [
 	"compat",
 	"rand",
@@ -423,7 +427,7 @@ features = ["rt-tokio"]
 # optional sentry metrics for crash/panic reporting
 [workspace.dependencies.sentry]
-version = "0.35.0"
+version = "0.37.0"
 default-features = false
 features = [
 	"backtrace",
@@ -439,9 +443,9 @@ features = [
 ]

 [workspace.dependencies.sentry-tracing]
-version = "0.35.0"
+version = "0.37.0"
 [workspace.dependencies.sentry-tower]
-version = "0.35.0"
+version = "0.37.0"

 # jemalloc usage
 [workspace.dependencies.tikv-jemalloc-sys]
@@ -475,7 +479,7 @@ default-features = false
 features = ["resource"]

 [workspace.dependencies.sd-notify]
-version = "0.4.3"
+version = "0.4.5"
 default-features = false

 [workspace.dependencies.hardened_malloc-rs]
@@ -492,25 +496,25 @@ version = "0.4.3"
 default-features = false

 [workspace.dependencies.termimad]
-version = "0.31.1"
+version = "0.31.2"
 default-features = false

 [workspace.dependencies.checked_ops]
 version = "0.1"

 [workspace.dependencies.syn]
-version = "2.0.90"
+version = "2.0"
 default-features = false
 features = ["full", "extra-traits"]

 [workspace.dependencies.quote]
-version = "1.0.37"
+version = "1.0"

 [workspace.dependencies.proc-macro2]
-version = "1.0.89"
+version = "1.0"

 [workspace.dependencies.bytesize]
-version = "1.3.2"
+version = "2.0"

 [workspace.dependencies.core_affinity]
 version = "0.8.1"
@@ -522,11 +526,11 @@ version = "0.2"
 version = "0.2"

 [workspace.dependencies.minicbor]
-version = "0.25.1"
+version = "0.26.3"
 features = ["std"]

 [workspace.dependencies.minicbor-serde]
-version = "0.3.2"
+version = "0.4.1"
 features = ["std"]

 [workspace.dependencies.maplit]
@@ -541,16 +545,16 @@ version = "1.0.2"
 # https://github.com/girlbossceo/tracing/commit/b348dca742af641c47bc390261f60711c2af573c
 [patch.crates-io.tracing-subscriber]
 git = "https://github.com/girlbossceo/tracing"
-rev = "05825066a6d0e9ad6b80dcf29457eb179ff4768c"
+rev = "1e64095a8051a1adf0d1faa307f9f030889ec2aa"
 [patch.crates-io.tracing]
 git = "https://github.com/girlbossceo/tracing"
-rev = "05825066a6d0e9ad6b80dcf29457eb179ff4768c"
+rev = "1e64095a8051a1adf0d1faa307f9f030889ec2aa"
 [patch.crates-io.tracing-core]
 git = "https://github.com/girlbossceo/tracing"
-rev = "05825066a6d0e9ad6b80dcf29457eb179ff4768c"
+rev = "1e64095a8051a1adf0d1faa307f9f030889ec2aa"
 [patch.crates-io.tracing-log]
 git = "https://github.com/girlbossceo/tracing"
-rev = "05825066a6d0e9ad6b80dcf29457eb179ff4768c"
+rev = "1e64095a8051a1adf0d1faa307f9f030889ec2aa"

 # adds a tab completion callback: https://github.com/girlbossceo/rustyline-async/commit/de26100b0db03e419a3d8e1dd26895d170d1fe50
 # adds event for CTRL+\: https://github.com/girlbossceo/rustyline-async/commit/67d8c49aeac03a5ef4e818f663eaa94dd7bf339b
@@ -566,10 +570,23 @@ rev = "fe4aebeeaae435af60087ddd56b573a2e0be671d"
 git = "https://github.com/girlbossceo/async-channel"
 rev = "92e5e74063bf2a3b10414bcc8a0d68b235644280"

+# adds affinity masks for selecting more than one core at a time
 [patch.crates-io.core_affinity]
 git = "https://github.com/girlbossceo/core_affinity_rs"
 rev = "9c8e51510c35077df888ee72a36b4b05637147da"

+# reverts hyperium#148 conflicting with our delicate federation resolver hooks
+[patch.crates-io.hyper-util]
+git = "https://github.com/girlbossceo/hyper-util"
+rev = "e4ae7628fe4fcdacef9788c4c8415317a4489941"
+
+# allows no-aaaa option in resolv.conf
+# bumps rust edition and toolchain to 1.86.0 and 2024
+# use sat_add on line number errors
+[patch.crates-io.resolv-conf]
+git = "https://github.com/girlbossceo/resolv-conf"
+rev = "200e958941d522a70c5877e3d846f55b5586c68d"
+
 #
 # Our crates
 #
@@ -841,6 +858,9 @@ unused_crate_dependencies = "allow"
 unsafe_code = "allow"
 variant_size_differences = "allow"

+# we check nightly clippy lints
+unknown_lints = "allow"
+
 #######################################
 #
 # Clippy lints
@@ -889,6 +909,7 @@ needless_continue = { level = "allow", priority = 1 }
 no_effect_underscore_binding = { level = "allow", priority = 1 }
 similar_names = { level = "allow", priority = 1 }
 single_match_else = { level = "allow", priority = 1 }
+struct_excessive_bools = { level = "allow", priority = 1 }
 struct_field_names = { level = "allow", priority = 1 }
 unnecessary_wraps = { level = "allow", priority = 1 }
 unused_async = { level = "allow", priority = 1 }


@@ -18,7 +18,7 @@ RESULTS_FILE="${3:-complement_test_results.jsonl}"
 COMPLEMENT_BASE_IMAGE="${COMPLEMENT_BASE_IMAGE:-complement-conduwuit:main}"

 # Complement tests that are skipped due to flakiness/reliability issues or we don't implement such features and won't for a long time
-#SKIPPED_COMPLEMENT_TESTS='-skip=TestPartialStateJoin.*'
+SKIPPED_COMPLEMENT_TESTS='TestPartialStateJoin.*|TestRoomDeleteAlias/Parallel/Regular_users_can_add_and_delete_aliases_when_m.*|TestRoomDeleteAlias/Parallel/Can_delete_canonical_alias|TestUnbanViaInvite.*|TestRoomState/Parallel/GET_/publicRooms_lists.*"|TestRoomDeleteAlias/Parallel/Users_with_sufficient_power-level_can_delete_other.*'

 # $COMPLEMENT_SRC needs to be a directory to Complement source code
 if [ -f "$COMPLEMENT_SRC" ]; then
@@ -68,7 +68,7 @@ set +o pipefail
 env \
 	-C "$COMPLEMENT_SRC" \
 	COMPLEMENT_BASE_IMAGE="$COMPLEMENT_BASE_IMAGE" \
-	go test -tags="conduwuit_blacklist" -timeout 1h -json ./tests/... | tee "$LOG_FILE"
+	go test -tags="conduwuit_blacklist" -skip="$SKIPPED_COMPLEMENT_TESTS" -v -timeout 1h -json ./tests/... | tee "$LOG_FILE"
 set -o pipefail

 # Post-process the results into an easy-to-compare format, sorted by Test name for reproducible results


@@ -2,9 +2,10 @@ array-size-threshold = 4096
 cognitive-complexity-threshold = 94 # TODO reduce me ALARA
 excessive-nesting-threshold = 11 # TODO reduce me to 4 or 5
 future-size-threshold = 7745 # TODO reduce me ALARA
-stack-size-threshold = 196608 # reduce me ALARA
+stack-size-threshold = 196608 # TODO reduce me ALARA
 too-many-lines-threshold = 780 # TODO reduce me to <= 100
 type-complexity-threshold = 250 # reduce me to ~200
+large-error-threshold = 256 # TODO reduce me ALARA

 disallowed-macros = [
 { path = "log::error", reason = "use conduwuit_core::error" },


@@ -195,14 +195,6 @@
 #
 #servernameevent_data_cache_capacity = varies by system

-# This item is undocumented. Please contribute documentation for it.
-#
-#server_visibility_cache_capacity = varies by system
-
-# This item is undocumented. Please contribute documentation for it.
-#
-#user_visibility_cache_capacity = varies by system
-
 # This item is undocumented. Please contribute documentation for it.
 #
 #stateinfo_cache_capacity = varies by system
@@ -535,9 +527,9 @@
 # Default room version conduwuit will create rooms with.
 #
-# Per spec, room version 10 is the default.
+# Per spec, room version 11 is the default.
 #
-#default_room_version = 10
+#default_room_version = 11

 # This item is undocumented. Please contribute documentation for it.
 #
@@ -602,7 +594,7 @@
 # Currently, conduwuit doesn't support inbound batched key requests, so
 # this list should only contain other Synapse servers.
 #
-# example: ["matrix.org", "envs.net", "tchncs.de"]
+# example: ["matrix.org", "tchncs.de"]
 #
 #trusted_servers = ["matrix.org"]
@@ -1194,13 +1186,16 @@
 #
 #prune_missing_media = false

-# Vector list of servers that conduwuit will refuse to download remote
-# media from.
+# Vector list of regex patterns of server names that conduwuit will refuse
+# to download remote media from.
+#
+# example: ["badserver\.tld$", "badphrase", "19dollarfortnitecards"]
 #
 #prevent_media_downloads_from = []

-# List of forbidden server names that we will block incoming AND outgoing
-# federation with, and block client room joins / remote user invites.
+# List of forbidden server names via regex patterns that we will block
+# incoming AND outgoing federation with, and block client room joins /
+# remote user invites.
 #
 # This check is applied on the room ID, room alias, sender server name,
 # sender user's server name, inbound federation X-Matrix origin, and
@@ -1208,11 +1203,15 @@
 #
 # Basically "global" ACLs.
 #
+# example: ["badserver\.tld$", "badphrase", "19dollarfortnitecards"]
+#
 #forbidden_remote_server_names = []

-# List of forbidden server names that we will block all outgoing federated
-# room directory requests for. Useful for preventing our users from
-# wandering into bad servers or spaces.
+# List of forbidden server names via regex patterns that we will block all
+# outgoing federated room directory requests for. Useful for preventing
+# our users from wandering into bad servers or spaces.
+#
+# example: ["badserver\.tld$", "badphrase", "19dollarfortnitecards"]
 #
 #forbidden_remote_room_directory_server_names = []
@@ -1323,7 +1322,7 @@
 # used, and startup as warnings if any room aliases in your database have
 # a forbidden room alias/ID.
 #
-# example: ["19dollarfortnitecards", "b[4a]droom"]
+# example: ["19dollarfortnitecards", "b[4a]droom", "badphrase"]
 #
 #forbidden_alias_names = []
@@ -1336,7 +1335,7 @@
 # startup as warnings if any local users in your database have a forbidden
 # username.
 #
-# example: ["administrator", "b[a4]dusernam[3e]"]
+# example: ["administrator", "b[a4]dusernam[3e]", "badphrase"]
 #
 #forbidden_usernames = []
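The options above (per commit d5ad973464) now treat these server-name lists as regular expressions rather than literal hostnames. As a rough illustration of what such matching can look like, here is a minimal sketch using the `regex` crate's `RegexSet`; the function name and structure are hypothetical, not conduwuit's actual code:

```rust
// Illustrative only: `RegexSet` compiles the whole deny list into one
// automaton, so a server name is tested against every pattern in a
// single pass. A real server would build the set once at config load.
use regex::RegexSet;

fn is_forbidden(server_name: &str, patterns: &[&str]) -> bool {
    RegexSet::new(patterns)
        .map(|set| set.is_match(server_name))
        .unwrap_or(false)
}

fn main() {
    let deny = [r"badserver\.tld$", "badphrase", "19dollarfortnitecards"];
    assert!(is_forbidden("matrix.badserver.tld", &deny));
    assert!(!is_forbidden("matrix.org", &deny));
}
```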


@@ -1,7 +1,6 @@
 # Summary

 - [Introduction](introduction.md)
-- [Differences from upstream Conduit](differences.md)
 - [Configuration](configuration.md)
   - [Examples](configuration/examples.md)
 - [Deploying](deploying.md)


@@ -145,25 +145,32 @@ sudo chmod 700 /var/lib/conduwuit/

 ## Setting up the Reverse Proxy

-Refer to the documentation or various guides online of your chosen reverse proxy
-software. There are many examples of basic Apache/Nginx reverse proxy setups
-out there.
-
-A [Caddy](https://caddyserver.com/) example will be provided as this
-is the recommended reverse proxy for new users and is very trivial to use
-(handles TLS, reverse proxy headers, etc transparently with proper defaults).
-
-Lighttpd is not supported as it seems to mess with the `X-Matrix` Authorization
-header, making federation non-functional. If a workaround is found, feel free to share to get it added to the documentation here.
-
-If using Apache, you need to use `nocanon` in your `ProxyPass` directive to prevent this (note that Apache isn't very good as a general reverse proxy and we discourage the usage of it if you can).
-
-If using Nginx, you need to give conduwuit the request URI using `$request_uri`, or like so:
-- `proxy_pass http://127.0.0.1:6167$request_uri;`
-- `proxy_pass http://127.0.0.1:6167;`
-
-Nginx users need to increase `client_max_body_size` (default is 1M) to match
-`max_request_size` defined in conduwuit.toml.
+We recommend Caddy as a reverse proxy, as it is trivial to use, handling TLS certificates, reverse proxy headers, etc transparently with proper defaults.
+For other software, please refer to their respective documentation or online guides.
+
+### Caddy
+
+After installing Caddy via your preferred method, create `/etc/caddy/conf.d/conduwuit_caddyfile`
+and enter this (substitute for your server name).
+
+```caddyfile
+your.server.name, your.server.name:8448 {
+    # TCP reverse_proxy
+    reverse_proxy 127.0.0.1:6167
+    # UNIX socket
+    #reverse_proxy unix//run/conduwuit/conduwuit.sock
+}
+```
+
+That's it! Just start and enable the service and you're set.
+
+```bash
+sudo systemctl enable --now caddy
+```
+
+### Other Reverse Proxies
+
+As we would prefer our users to use Caddy, we will not provide configuration files for other proxys.

 You will need to reverse proxy everything under following routes:
 - `/_matrix/` - core Matrix C-S and S-S APIs
@@ -186,25 +193,19 @@ Examples of delegation:
 - <https://puppygock.gay/.well-known/matrix/server>
 - <https://puppygock.gay/.well-known/matrix/client>

-### Caddy
-
-Create `/etc/caddy/conf.d/conduwuit_caddyfile` and enter this (substitute for
-your server name).
-
-```caddyfile
-your.server.name, your.server.name:8448 {
-    # TCP reverse_proxy
-    reverse_proxy 127.0.0.1:6167
-    # UNIX socket
-    #reverse_proxy unix//run/conduwuit/conduwuit.sock
-}
-```
-
-That's it! Just start and enable the service and you're set.
-
-```bash
-sudo systemctl enable --now caddy
-```
+For Apache and Nginx there are many examples available online.
+
+Lighttpd is not supported as it seems to mess with the `X-Matrix` Authorization
+header, making federation non-functional. If a workaround is found, feel free to share to get it added to the documentation here.
+
+If using Apache, you need to use `nocanon` in your `ProxyPass` directive to prevent httpd from messing with the `X-Matrix` header (note that Apache isn't very good as a general reverse proxy and we discourage the usage of it if you can).
+
+If using Nginx, you need to give conduwuit the request URI using `$request_uri`, or like so:
+- `proxy_pass http://127.0.0.1:6167$request_uri;`
+- `proxy_pass http://127.0.0.1:6167;`
+
+Nginx users need to increase `client_max_body_size` (default is 1M) to match
+`max_request_size` defined in conduwuit.toml.

 ## You're done


@@ -4,10 +4,6 @@

 {{#include ../README.md:body}}

-#### What's different about your fork than upstream Conduit?
-
-See the [differences](differences.md) page
-
 #### How can I deploy my own?

 - [Deployment options](deploying.md)


@@ -161,24 +161,6 @@ name = "markdownlint"
 group = "lints"
 script = "markdownlint docs *.md || true" # TODO: fix the ton of markdown lints so we can drop `|| true`

-[[task]]
-name = "cargo/all"
-group = "tests"
-script = """
-env DIRENV_DEVSHELL=all-features \
-	direnv exec . \
-	cargo test \
-		--workspace \
-		--locked \
-		--profile test \
-		--all-targets \
-		--no-fail-fast \
-		--all-features \
-		--color=always \
-		-- \
-		--color=always
-"""
-
 [[task]]
 name = "cargo/default"
 group = "tests"
@@ -196,24 +178,6 @@ env DIRENV_DEVSHELL=default \
 		--color=always
 """

-[[task]]
-name = "cargo/no-features"
-group = "tests"
-script = """
-env DIRENV_DEVSHELL=no-features \
-	direnv exec . \
-	cargo test \
-		--workspace \
-		--locked \
-		--profile test \
-		--all-targets \
-		--no-fail-fast \
-		--no-default-features \
-		--color=always \
-		-- \
-		--color=always
-"""
-
 # Checks if the generated example config differs from the checked in repo's
 # example config.
 [[task]]

flake.lock (generated, 6 changed lines)

@@ -80,11 +80,11 @@
 "complement": {
 "flake": false,
 "locked": {
-"lastModified": 1741378155,
-"narHash": "sha256-rJSfqf3q4oWxcAwENtAowLZeCi8lktwKVH9XQvvZR64=",
+"lastModified": 1741891349,
+"narHash": "sha256-YvrzOWcX7DH1drp5SGa+E/fc7wN3hqFtPbqPjZpOu1Q=",
 "owner": "girlbossceo",
 "repo": "complement",
-"rev": "1502a00d8551d0f6e8954a23e43868877c3e57d9",
+"rev": "e587b3df569cba411aeac7c20b6366d03c143745",
 "type": "github"
 },
 "original": {


@@ -26,7 +26,7 @@
 file = ./rust-toolchain.toml;
 # See also `rust-toolchain.toml`
-sha256 = "sha256-AJ6LX/Q/Er9kS15bn9iflkUwcgYqRQxiOIL2ToVAXaU=";
+sha256 = "sha256-X/4ZBHO3iW0fOenQ3foEvscgAPJYl2abspaBThDOukI=";
 };

 mkScope = pkgs: pkgs.lib.makeScope pkgs.newScope (self: {


@@ -0,0 +1,21 @@
+-----BEGIN CERTIFICATE-----
+MIIDfzCCAmegAwIBAgIUcrZdSPmCh33Evys/U6mTPpShqdcwDQYJKoZIhvcNAQEL
+BQAwPzELMAkGA1UEBhMCNjkxCzAJBgNVBAgMAjQyMRUwEwYDVQQKDAx3b29mZXJz
+IGluYy4xDDAKBgNVBAMMA2hzMTAgFw0yNTAzMTMxMjU4NTFaGA8yMDUyMDcyODEy
+NTg1MVowPzELMAkGA1UEBhMCNjkxCzAJBgNVBAgMAjQyMRUwEwYDVQQKDAx3b29m
+ZXJzIGluYy4xDDAKBgNVBAMMA2hzMTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCC
+AQoCggEBANL+h2ZmK/FqN5uLJPtIy6Feqcyb6EX7MQBEtxuJ56bTAbjHuCLZLpYt
+/wOWJ91drHqZ7Xd5iTisGdMu8YS803HSnHkzngf4VXKhVrdzW2YDrpZRxmOhtp88
+awOHmP7mqlJyBbCOQw8aDVrT0KmEIWzA7g+nFRQ5Ff85MaP+sQrHGKZbo61q8HBp
+L0XuaqNckruUKtxnEqrm5xx5sYyYKg7rrSFE5JMFoWKB1FNWJxyWT42BhGtnJZsK
+K5c+NDSOU4TatxoN6mpNSBpCz/a11PiQHMEfqRk6JA4g3911dqPTfZBevUdBh8gl
+8maIzqeZGhvyeKTmull1Y0781yyuj98CAwEAAaNxMG8wCQYDVR0TBAIwADALBgNV
+HQ8EBAMCBPAwNgYDVR0RBC8wLYIRKi5kb2NrZXIuaW50ZXJuYWyCA2hzMYIDaHMy
+ggNoczOCA2hzNIcEfwAAATAdBgNVHQ4EFgQUr4VYrmW1d+vjBTJewvy7fJYhLDYw
+DQYJKoZIhvcNAQELBQADggEBADkYqkjNYxjWX8hUUAmFHNdCwzT1CpYe/5qzLiyJ
+irDSdMlC5g6QqMUSrpu7nZxo1lRe1dXGroFVfWpoDxyCjSQhplQZgtYqtyLfOIx+
+HQ7cPE/tUU/KsTGc0aL61cETB6u8fj+rQKUGdfbSlm0Rpu4v0gC8RnDj06X/hZ7e
+VkWU+dOBzxlqHuLlwFFtVDgCyyTatIROx5V+GpMHrVqBPO7HcHhwqZ30k2kMM8J3
+y1CWaliQM85jqtSZV+yUHKQV8EksSowCFJuguf+Ahz0i0/koaI3i8m4MRN/1j13d
+jbTaX5a11Ynm3A27jioZdtMRty6AJ88oCp18jxVzqTxNNO4=
+-----END CERTIFICATE-----


@@ -6,7 +6,7 @@ allow_public_room_directory_over_federation = true
 allow_public_room_directory_without_auth = true
 allow_registration = true
 database_path = "/database"
-log = "trace,h2=warn,hyper=warn"
+log = "trace,h2=debug,hyper=debug"
 port = [8008, 8448]
 trusted_servers = []
 only_query_trusted_key_servers = false
@@ -19,11 +19,11 @@ url_preview_domain_explicit_denylist = ["*"]
 media_compat_file_link = false
 media_startup_check = true
 prune_missing_media = true
-log_colors = false
+log_colors = true
 admin_room_notices = false
 allow_check_for_updates = false
 intentionally_unknown_config_option_for_testing = true
-rocksdb_log_level = "debug"
+rocksdb_log_level = "info"
 rocksdb_max_log_files = 1
 rocksdb_recovery_mode = 0
 rocksdb_paranoid_file_checks = true


@@ -3,10 +3,8 @@
 , buildEnv
 , coreutils
 , dockerTools
-, gawk
 , lib
 , main
-, openssl
 , stdenv
 , tini
 , writeShellScriptBin
@@ -42,21 +40,6 @@ let
 start = writeShellScriptBin "start" ''
 	set -euxo pipefail

-	cp ${./v3.ext} /complement/v3.ext
-	echo "DNS.1 = $SERVER_NAME" >> /complement/v3.ext
-	echo "IP.1 = $(${lib.getExe gawk} 'END{print $1}' /etc/hosts)" \
-		>> /complement/v3.ext
-	${lib.getExe openssl} x509 \
-		-req \
-		-extfile /complement/v3.ext \
-		-in ${./signing_request.csr} \
-		-CA /complement/ca/ca.crt \
-		-CAkey /complement/ca/ca.key \
-		-CAcreateserial \
-		-out /complement/certificate.crt \
-		-days 1 \
-		-sha256
-
 	${lib.getExe' coreutils "env"} \
 		CONDUWUIT_SERVER_NAME="$SERVER_NAME" \
 		${lib.getExe main'}
@@ -93,7 +76,7 @@ dockerTools.buildImage {
 Env = [
 	"CONDUWUIT_TLS__KEY=${./private_key.key}"
-	"CONDUWUIT_TLS__CERTS=/complement/certificate.crt"
+	"CONDUWUIT_TLS__CERTS=${./certificate.crt}"
 	"CONDUWUIT_CONFIG=${./config.toml}"
 	"RUST_BACKTRACE=full"
 ];


@@ -1,16 +1,16 @@
 -----BEGIN CERTIFICATE REQUEST-----
-MIICkTCCAXkCAQAwTDELMAkGA1UEBhMCNjkxCzAJBgNVBAgMAjQyMRYwFAYDVQQK
-DA13b29mZXJzLCBpbmMuMRgwFgYDVQQDDA9jb21wbGVtZW50LW9ubHkwggEiMA0G
-CSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDS/odmZivxajebiyT7SMuhXqnMm+hF
-+zEARLcbieem0wG4x7gi2S6WLf8DlifdXax6me13eYk4rBnTLvGEvNNx0px5M54H
-+FVyoVa3c1tmA66WUcZjobafPGsDh5j+5qpScgWwjkMPGg1a09CphCFswO4PpxUU
-ORX/OTGj/rEKxximW6OtavBwaS9F7mqjXJK7lCrcZxKq5uccebGMmCoO660hROST
-BaFigdRTVicclk+NgYRrZyWbCiuXPjQ0jlOE2rcaDepqTUgaQs/2tdT4kBzBH6kZ
-OiQOIN/ddXaj032QXr1HQYfIJfJmiM6nmRob8nik5rpZdWNO/Ncsro/fAgMBAAGg
-ADANBgkqhkiG9w0BAQsFAAOCAQEAjW+aD4E0phtRT5b2RyedY1uiSe7LQECsQnIO
-wUSyGGG1GXYlJscyxxyzE9W9+QIALrxZkmc/+e02u+bFb1zQXW/uB/7u7FgXzrj6
-2YSDiWYXiYKvgGWEfCi3lpcTJK9x6WWkR+iREaoKRjcl0ynhhGuR7YwP38TNyu+z
-FN6B1Lo398fvJkaTCiiHngWiwztXZ2d0MxkicuwZ1LJhIQA72OTl3QoRb5uiqbze
-T9QJfU6W3v8cB8c8PuKMv5gl1QsGNtlfyQB56/X0cMxWl25vWXd2ankLkAGRTDJ8
-9YZHxP1ki4/yh75AknFq02nCOsmxYrAazCYgP2TzIPhQwBurKQ==
+MIIChDCCAWwCAQAwPzELMAkGA1UEBhMCNjkxCzAJBgNVBAgMAjQyMRUwEwYDVQQK
+DAx3b29mZXJzIGluYy4xDDAKBgNVBAMMA2hzMTCCASIwDQYJKoZIhvcNAQEBBQAD
+ggEPADCCAQoCggEBANL+h2ZmK/FqN5uLJPtIy6Feqcyb6EX7MQBEtxuJ56bTAbjH
+uCLZLpYt/wOWJ91drHqZ7Xd5iTisGdMu8YS803HSnHkzngf4VXKhVrdzW2YDrpZR
+xmOhtp88awOHmP7mqlJyBbCOQw8aDVrT0KmEIWzA7g+nFRQ5Ff85MaP+sQrHGKZb
+o61q8HBpL0XuaqNckruUKtxnEqrm5xx5sYyYKg7rrSFE5JMFoWKB1FNWJxyWT42B
+hGtnJZsKK5c+NDSOU4TatxoN6mpNSBpCz/a11PiQHMEfqRk6JA4g3911dqPTfZBe
+vUdBh8gl8maIzqeZGhvyeKTmull1Y0781yyuj98CAwEAAaAAMA0GCSqGSIb3DQEB
+CwUAA4IBAQDR/gjfxN0IID1MidyhZB4qpdWn3m6qZnEQqoTyHHdWalbfNXcALC79
+ffS+Smx40N5hEPvqy6euR89N5YuYvt8Hs+j7aWNBn7Wus5Favixcm2JcfCTJn2R3
+r8FefuSs2xGkoyGsPFFcXE13SP/9zrZiwvOgSIuTdz/Pbh6GtEx7aV4DqHJsrXnb
+XuPxpQleoBqKvQgSlmaEBsJg13TQB+Fl2foBVUtqAFDQiv+RIuircf0yesMCKJaK
+MPH4Oo+r3pR8lI8ewfJPreRhCoV+XrGYMubaakz003TJ1xlOW8M+N9a6eFyMVh76
+U1nY/KP8Ua6Lgaj9PRz7JCRzNoshZID/
 -----END CERTIFICATE REQUEST-----


@@ -4,3 +4,9 @@ keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
 subjectAltName = @alt_names

 [alt_names]
+DNS.1 = *.docker.internal
+DNS.2 = hs1
+DNS.3 = hs2
+DNS.4 = hs3
+DNS.5 = hs4
+IP.1 = 127.0.0.1


@@ -9,7 +9,7 @@
 # If you're having trouble making the relevant changes, bug a maintainer.

 [toolchain]
-channel = "1.85.0"
+channel = "1.86.0"
 profile = "minimal"
 components = [
 	# For rust-analyzer


@@ -6,7 +6,9 @@ use std::{
 };

 use conduwuit::{
-	Error, PduEvent, PduId, RawPduId, Result, debug_error, err, info, trace, utils,
+	Error, Result, debug_error, err, info,
+	matrix::pdu::{PduEvent, PduId, RawPduId},
+	trace, utils,
 	utils::{
 		stream::{IterStream, ReadyExt},
 		string::EMPTY,


@@ -1,3 +1,4 @@
+#![allow(rustdoc::broken_intra_doc_links)]
 mod commands;

 use clap::Subcommand;
@@ -27,18 +28,18 @@ pub(super) enum MediaCommand {
 	DeleteList,

 	/// - Deletes all remote (and optionally local) media created before or
-	///   after \[duration] time using filesystem metadata first created at
-	///   date, or fallback to last modified date. This will always ignore
-	///   errors by default.
+	///   after [duration] time using filesystem metadata first created at date,
+	///   or fallback to last modified date. This will always ignore errors by
+	///   default.
 	DeletePastRemoteMedia {
 		/// - The relative time (e.g. 30s, 5m, 7d) within which to search
 		duration: String,

-		/// - Only delete media created more recently than \[duration] ago
+		/// - Only delete media created before [duration] ago
 		#[arg(long, short)]
 		before: bool,

-		/// - Only delete media created after \[duration] ago
+		/// - Only delete media created after [duration] ago
 		#[arg(long, short)]
 		after: bool,
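For context on the `#![allow(rustdoc::broken_intra_doc_links)]` and the un-escaped `[duration]` above: rustdoc treats square brackets in doc comments as intra-doc links and warns when they resolve to nothing. A minimal sketch (hypothetical item name) of the two ways to handle it:

```rust
// Minimal sketch (hypothetical item) of the rustdoc issue handled above.
// rustdoc parses square brackets in doc comments as intra-doc links, so
// `[duration]` fires `rustdoc::broken_intra_doc_links` when no item of
// that name exists. Escaping as `\[duration]` avoids the lint; the
// crate-level allow below permits the plainer, unescaped form instead.
#![allow(rustdoc::broken_intra_doc_links)]

/// Deletes media created before or after [duration] ago.
pub struct DeletePastRemoteMedia;

fn main() {}
```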


@@ -91,6 +91,7 @@ async fn process_command(services: Arc<Services>, input: &CommandInput) -> Proce
 	}
 }

+#[allow(clippy::result_large_err)]
 fn handle_panic(error: &Error, command: &CommandInput) -> ProcessorResult {
 	let link =
 		"Please submit a [bug report](https://github.com/girlbossceo/conduwuit/issues/new). 🥺";
@@ -100,7 +101,7 @@ fn handle_panic(error: &Error, command: &CommandInput) -> ProcessorResult {
 	Err(reply(content, command.reply_id.as_deref()))
 }

-// Parse and process a message from the admin room
+/// Parse and process a message from the admin room
 async fn process(
 	context: &Command<'_>,
 	command: AdminCommand,
@@ -164,7 +165,8 @@ fn capture_create(context: &Command<'_>) -> (Arc<Capture>, Arc<Mutex<String>>) {
 	(capture, logs)
 }

-// Parse chat messages from the admin room into an AdminCommand object
+/// Parse chat messages from the admin room into an AdminCommand object
+#[allow(clippy::result_large_err)]
 fn parse<'a>(
 	services: &Arc<Services>,
 	input: &'a CommandInput,
@@ -232,7 +234,7 @@ fn complete_command(mut cmd: clap::Command, line: &str) -> String {
 	ret.join(" ")
 }

-// Parse chat messages from the admin room into an AdminCommand object
+/// Parse chat messages from the admin room into an AdminCommand object
 fn parse_line(command_line: &str) -> Vec<String> {
 	let mut argv = command_line
 		.split_whitespace()


@@ -1,15 +1,16 @@
-use std::{borrow::Cow, collections::BTreeMap, ops::Deref};
+use std::{borrow::Cow, collections::BTreeMap, ops::Deref, sync::Arc};

 use clap::Subcommand;
 use conduwuit::{
     Err, Result, apply, at, is_zero,
     utils::{
-        IterStream,
-        stream::{ReadyExt, TryIgnore, TryParallelExt},
+        stream::{IterStream, ReadyExt, TryIgnore, TryParallelExt},
         string::EMPTY,
     },
 };
-use futures::{FutureExt, StreamExt, TryStreamExt};
+use conduwuit_database::Map;
+use conduwuit_service::Services;
+use futures::{FutureExt, Stream, StreamExt, TryStreamExt};
 use ruma::events::room::message::RoomMessageEventContent;
 use tokio::time::Instant;
@@ -172,22 +173,18 @@ pub(super) async fn compact(
 ) -> Result<RoomMessageEventContent> {
     use conduwuit_database::compact::Options;

-    let default_all_maps = map
-        .is_none()
-        .then(|| {
-            self.services
-                .db
-                .keys()
-                .map(Deref::deref)
-                .map(ToOwned::to_owned)
-        })
-        .into_iter()
-        .flatten();
+    let default_all_maps: Option<_> = map.is_none().then(|| {
+        self.services
+            .db
+            .keys()
+            .map(Deref::deref)
+            .map(ToOwned::to_owned)
+    });

     let maps: Vec<_> = map
         .unwrap_or_default()
         .into_iter()
-        .chain(default_all_maps)
+        .chain(default_all_maps.into_iter().flatten())
         .map(|map| self.services.db.get(&map))
         .filter_map(Result::ok)
         .cloned()
@@ -237,25 +234,8 @@ pub(super) async fn raw_count(
 ) -> Result<RoomMessageEventContent> {
     let prefix = prefix.as_deref().unwrap_or(EMPTY);

-    let default_all_maps = map
-        .is_none()
-        .then(|| self.services.db.keys().map(Deref::deref))
-        .into_iter()
-        .flatten();
-
-    let maps: Vec<_> = map
-        .iter()
-        .map(String::as_str)
-        .chain(default_all_maps)
-        .map(|map| self.services.db.get(map))
-        .filter_map(Result::ok)
-        .cloned()
-        .collect();
-
     let timer = Instant::now();
-    let count = maps
-        .iter()
-        .stream()
+    let count = with_maps_or(map.as_deref(), self.services)
         .then(|map| map.raw_count_prefix(&prefix))
         .ready_fold(0_usize, usize::saturating_add)
         .await;
@@ -300,25 +280,8 @@ pub(super) async fn raw_keys_sizes(
 ) -> Result<RoomMessageEventContent> {
     let prefix = prefix.as_deref().unwrap_or(EMPTY);

-    let default_all_maps = map
-        .is_none()
-        .then(|| self.services.db.keys().map(Deref::deref))
-        .into_iter()
-        .flatten();
-
-    let maps: Vec<_> = map
-        .iter()
-        .map(String::as_str)
-        .chain(default_all_maps)
-        .map(|map| self.services.db.get(map))
-        .filter_map(Result::ok)
-        .cloned()
-        .collect();
-
     let timer = Instant::now();
-    let result = maps
-        .iter()
-        .stream()
+    let result = with_maps_or(map.as_deref(), self.services)
         .map(|map| map.raw_keys_prefix(&prefix))
         .flatten()
         .ignore_err()
@@ -345,25 +308,8 @@ pub(super) async fn raw_keys_total(
 ) -> Result<RoomMessageEventContent> {
     let prefix = prefix.as_deref().unwrap_or(EMPTY);

-    let default_all_maps = map
-        .is_none()
-        .then(|| self.services.db.keys().map(Deref::deref))
-        .into_iter()
-        .flatten();
-
-    let maps: Vec<_> = map
-        .iter()
-        .map(String::as_str)
-        .chain(default_all_maps)
-        .map(|map| self.services.db.get(map))
-        .filter_map(Result::ok)
-        .cloned()
-        .collect();
-
     let timer = Instant::now();
-    let result = maps
-        .iter()
-        .stream()
+    let result = with_maps_or(map.as_deref(), self.services)
         .map(|map| map.raw_keys_prefix(&prefix))
         .flatten()
         .ignore_err()
@@ -387,25 +333,8 @@ pub(super) async fn raw_vals_sizes(
 ) -> Result<RoomMessageEventContent> {
     let prefix = prefix.as_deref().unwrap_or(EMPTY);

-    let default_all_maps = map
-        .is_none()
-        .then(|| self.services.db.keys().map(Deref::deref))
-        .into_iter()
-        .flatten();
-
-    let maps: Vec<_> = map
-        .iter()
-        .map(String::as_str)
-        .chain(default_all_maps)
-        .map(|map| self.services.db.get(map))
-        .filter_map(Result::ok)
-        .cloned()
-        .collect();
-
     let timer = Instant::now();
-    let result = maps
-        .iter()
-        .stream()
+    let result = with_maps_or(map.as_deref(), self.services)
         .map(|map| map.raw_stream_prefix(&prefix))
         .flatten()
         .ignore_err()
@@ -433,25 +362,8 @@ pub(super) async fn raw_vals_total(
 ) -> Result<RoomMessageEventContent> {
     let prefix = prefix.as_deref().unwrap_or(EMPTY);

-    let default_all_maps = map
-        .is_none()
-        .then(|| self.services.db.keys().map(Deref::deref))
-        .into_iter()
-        .flatten();
-
-    let maps: Vec<_> = map
-        .iter()
-        .map(String::as_str)
-        .chain(default_all_maps)
-        .map(|map| self.services.db.get(map))
-        .filter_map(Result::ok)
-        .cloned()
-        .collect();
-
     let timer = Instant::now();
-    let result = maps
-        .iter()
-        .stream()
+    let result = with_maps_or(map.as_deref(), self.services)
         .map(|map| map.raw_stream_prefix(&prefix))
         .flatten()
         .ignore_err()
@@ -573,3 +485,20 @@ pub(super) async fn raw_maps(&self) -> Result<RoomMessageEventContent> {
     Ok(RoomMessageEventContent::notice_markdown(format!("{list:#?}")))
 }
+
+fn with_maps_or<'a>(
+    map: Option<&'a str>,
+    services: &'a Services,
+) -> impl Stream<Item = &'a Arc<Map>> + Send + 'a {
+    let default_all_maps = map
+        .is_none()
+        .then(|| services.db.keys().map(Deref::deref))
+        .into_iter()
+        .flatten();
+
+    map.into_iter()
+        .chain(default_all_maps)
+        .map(|map| services.db.get(map))
+        .filter_map(Result::ok)
+        .stream()
+}
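The `with_maps_or` helper above replaces five nearly identical copies of the "use the named map, or fall back to every map" boilerplate with a single stream-returning function. A standalone sketch of the same pattern over a plain `HashMap` (all names here are hypothetical, chosen only to mirror the shape of the helper):

```rust
use std::collections::HashMap;

use futures::{Stream, StreamExt, stream};

/// Yield the handle for `name` when given, otherwise every handle,
/// silently skipping names that do not exist (the analogue of the
/// `.filter_map(Result::ok)` in the helper above).
fn one_or_all<'a>(
    name: Option<&'a str>,
    db: &'a HashMap<String, i32>,
) -> impl Stream<Item = &'a i32> + 'a {
    let all = name
        .is_none()
        .then(|| db.keys().map(String::as_str))
        .into_iter()
        .flatten();

    stream::iter(name.into_iter().chain(all).filter_map(|n| db.get(n)))
}

fn main() {
    let db = HashMap::from([("a".to_owned(), 1), ("b".to_owned(), 2)]);
    // No name given: the stream visits every map.
    let sum = futures::executor::block_on(
        one_or_all(None, &db).fold(0, |s, v| async move { s + v }),
    );
    assert_eq!(sum, 3);
}
```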


@@ -2,7 +2,8 @@ use std::{collections::BTreeMap, fmt::Write as _};
 use api::client::{full_user_deactivate, join_room_by_id_helper, leave_room};
 use conduwuit::{
-    PduBuilder, Result, debug, debug_warn, error, info, is_equal_to,
+    Result, debug, debug_warn, error, info, is_equal_to,
+    matrix::pdu::PduBuilder,
     utils::{self, ReadyExt},
     warn,
 };


@@ -35,6 +35,7 @@ brotli_compression = [
 ]

 [dependencies]
+async-trait.workspace = true
 axum-client-ip.workspace = true
 axum-extra.workspace = true
 axum.workspace = true


@@ -3,9 +3,13 @@ use std::fmt::Write;
 use axum::extract::State;
 use axum_client_ip::InsecureClientIp;
 use conduwuit::{
-    Err, Error, PduBuilder, Result, debug_info, err, error, info, is_equal_to, utils,
-    utils::ReadyExt, warn,
+    Err, Error, Result, debug_info, err, error, info, is_equal_to,
+    matrix::pdu::PduBuilder,
+    utils,
+    utils::{ReadyExt, stream::BroadbandExt},
+    warn,
 };
+use conduwuit_service::Services;
 use futures::{FutureExt, StreamExt};
 use register::RegistrationKind;
 use ruma::{
@@ -29,7 +33,6 @@ use ruma::{
     },
     push,
 };
-use service::Services;

 use super::{DEVICE_ID_LENGTH, SESSION_ID_LENGTH, TOKEN_LENGTH, join_room_by_id_helper};
 use crate::Ruma;
@@ -109,7 +112,7 @@ pub(crate) async fn get_register_available_route(
         if !info.is_user_match(&user_id) {
             return Err!(Request(Exclusive("Username is not in an appservice namespace.")));
         }
-    };
+    }

     if services.appservice.is_exclusive_user_id(&user_id).await {
         return Err!(Request(Exclusive("Username is reserved by an appservice.")));
@@ -145,7 +148,7 @@ pub(crate) async fn register_route(
     let is_guest = body.kind == RegistrationKind::Guest;
     let emergency_mode_enabled = services.config.emergency_password.is_some();

-    if !services.globals.allow_registration() && body.appservice_info.is_none() {
+    if !services.config.allow_registration && body.appservice_info.is_none() {
         match (body.username.as_ref(), body.initial_device_display_name.as_ref()) {
             | (Some(username), Some(device_display_name)) => {
                 info!(%is_guest, user = %username, device_name = %device_display_name, "Rejecting registration attempt as registration is disabled");
@@ -159,14 +162,14 @@ pub(crate) async fn register_route(
             | (None, _) => {
                 info!(%is_guest, "Rejecting registration attempt as registration is disabled");
             },
-        };
+        }

         return Err!(Request(Forbidden("Registration has been disabled.")));
     }

     if is_guest
-        && (!services.globals.allow_guest_registration()
-            || (services.globals.allow_registration()
+        && (!services.config.allow_guest_registration
+            || (services.config.allow_registration
                 && services.globals.registration_token.is_some()))
     {
         info!(
@@ -317,14 +320,14 @@ pub(crate) async fn register_route(
                 // Success!
             },
             | _ => match body.json_body {
-                | Some(json) => {
+                | Some(ref json) => {
                     uiaainfo.session = Some(utils::random_string(SESSION_ID_LENGTH));
                     services.uiaa.create(
                         &UserId::parse_with_server_name("", services.globals.server_name())
                             .unwrap(),
                         "".into(),
                         &uiaainfo,
-                        &json,
+                        json,
                     );

                     return Err(Error::Uiaa(uiaainfo));
                 },
@@ -372,8 +375,12 @@ pub(crate) async fn register_route(
         )
         .await?;

-    // Inhibit login does not work for guests
-    if !is_guest && body.inhibit_login {
+    if (!is_guest && body.inhibit_login)
+        || body
+            .appservice_info
+            .as_ref()
+            .is_some_and(|appservice| appservice.registration.device_management)
+    {
         return Ok(register::v3::Response {
             access_token: None,
             user_id,
@@ -440,7 +447,7 @@ pub(crate) async fn register_route(
     }

     // log in conduit admin channel if a guest registered
-    if body.appservice_info.is_none() && is_guest && services.globals.log_guest_registrations() {
+    if body.appservice_info.is_none() && is_guest && services.config.log_guest_registrations {
         debug_info!("New guest user \"{user_id}\" registered on this server.");

         if !device_display_name.is_empty() {
@@ -489,7 +496,7 @@ pub(crate) async fn register_route(
     if body.appservice_info.is_none()
         && !services.server.config.auto_join_rooms.is_empty()
-        && (services.globals.allow_guests_auto_join_rooms() || !is_guest)
+        && (services.config.allow_guests_auto_join_rooms || !is_guest)
     {
         for room in &services.server.config.auto_join_rooms {
             let Ok(room_id) = services.rooms.alias.resolve(room).await else {
@@ -627,6 +634,26 @@ pub(crate) async fn change_password_route(
             .ready_filter(|id| *id != sender_device)
             .for_each(|id| services.users.remove_device(sender_user, id))
             .await;
+
+        // Remove all pushers except the ones associated with this session
+        services
+            .pusher
+            .get_pushkeys(sender_user)
+            .map(ToOwned::to_owned)
+            .broad_filter_map(|pushkey| async move {
+                services
+                    .pusher
+                    .get_pusher_device(&pushkey)
+                    .await
+                    .ok()
+                    .filter(|pusher_device| pusher_device != sender_device)
+                    .is_some()
+                    .then_some(pushkey)
+            })
+            .for_each(|pushkey| async move {
+                services.pusher.delete_pusher(sender_user, &pushkey).await;
+            })
+            .await;
     }

     info!("User {sender_user} changed their password.");


@@ -1,5 +1,6 @@
 use axum::extract::State;
-use conduwuit::{Err, err};
+use conduwuit::{Err, Result, err};
+use conduwuit_service::Services;
 use ruma::{
     RoomId, UserId,
     api::client::config::{
@@ -15,7 +16,7 @@ use ruma::{
 use serde::Deserialize;
 use serde_json::{json, value::RawValue as RawJsonValue};

-use crate::{Result, Ruma, service::Services};
+use crate::Ruma;

 /// # `PUT /_matrix/client/r0/user/{userId}/account_data/{type}`
 ///


@@ -1,12 +1,12 @@
 use axum::extract::State;
 use conduwuit::{Err, Result, debug};
+use conduwuit_service::Services;
 use futures::StreamExt;
 use rand::seq::SliceRandom;
 use ruma::{
     OwnedServerName, RoomAliasId, RoomId,
     api::client::alias::{create_alias, delete_alias, get_alias},
 };
-use service::Services;

 use crate::Ruma;


@@ -22,7 +22,13 @@ pub(crate) async fn appservice_ping(
         )));
     }

-    if appservice_info.registration.url.is_none() {
+    if appservice_info.registration.url.is_none()
+        || appservice_info
+            .registration
+            .url
+            .as_ref()
+            .is_some_and(|url| url.is_empty() || url == "null")
+    {
         return Err!(Request(UrlNotSet(
             "Appservice does not have a URL set, there is nothing to ping."
         )));


@@ -1,5 +1,7 @@
+use std::cmp::Ordering;
+
 use axum::extract::State;
-use conduwuit::{Err, err};
+use conduwuit::{Err, Result, err};
 use ruma::{
     UInt,
     api::client::backup::{
@@ -11,7 +13,7 @@ use ruma::{
     },
 };

-use crate::{Result, Ruma};
+use crate::Ruma;

 /// # `POST /_matrix/client/r0/room_keys/version`
 ///
@@ -232,16 +234,77 @@ pub(crate) async fn add_backup_keys_for_session_route(
         )));
     }

-    services
-        .key_backups
-        .add_key(
-            body.sender_user(),
-            &body.version,
-            &body.room_id,
-            &body.session_id,
-            &body.session_data,
-        )
-        .await?;
+    // Check if we already have a better key
+    let mut ok_to_replace = true;
+    if let Some(old_key) = &services
+        .key_backups
+        .get_session(body.sender_user(), &body.version, &body.room_id, &body.session_id)
+        .await
+        .ok()
+    {
+        let old_is_verified = old_key
+            .get_field::<bool>("is_verified")?
+            .unwrap_or_default();
+
+        let new_is_verified = body
+            .session_data
+            .get_field::<bool>("is_verified")?
+            .ok_or_else(|| err!(Request(BadJson("`is_verified` field should exist"))))?;
+
+        // Prefer key that `is_verified`
+        if old_is_verified != new_is_verified {
+            if old_is_verified {
+                ok_to_replace = false;
+            }
+        } else {
+            // If both have same `is_verified`, prefer the one with lower
+            // `first_message_index`
+            let old_first_message_index = old_key
+                .get_field::<UInt>("first_message_index")?
+                .unwrap_or(UInt::MAX);
+
+            let new_first_message_index = body
+                .session_data
+                .get_field::<UInt>("first_message_index")?
+                .ok_or_else(|| {
+                    err!(Request(BadJson("`first_message_index` field should exist")))
+                })?;
+
+            ok_to_replace = match new_first_message_index.cmp(&old_first_message_index) {
+                | Ordering::Less => true,
+                | Ordering::Greater => false,
+                | Ordering::Equal => {
+                    // If both have same `first_message_index`, prefer the one with lower
+                    // `forwarded_count`
+                    let old_forwarded_count = old_key
+                        .get_field::<UInt>("forwarded_count")?
+                        .unwrap_or(UInt::MAX);
+
+                    let new_forwarded_count = body
+                        .session_data
+                        .get_field::<UInt>("forwarded_count")?
+                        .ok_or_else(|| {
+                            err!(Request(BadJson("`forwarded_count` field should exist")))
+                        })?;
+
+                    new_forwarded_count < old_forwarded_count
+                },
+            };
+        }
+    }
+
+    if ok_to_replace {
+        services
+            .key_backups
+            .add_key(
+                body.sender_user(),
+                &body.version,
+                &body.room_id,
+                &body.session_id,
+                &body.session_data,
+            )
+            .await?;
+    }

     Ok(add_backup_keys_for_session::v3::Response {
         count: services
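The guard above decides whether an uploaded room key may overwrite the stored one using a three-step preference: a verified key beats an unverified one, then the lower `first_message_index` wins, then the lower `forwarded_count`; an exact tie keeps the existing key. The same ordering as a pure function, assuming the three fields have already been parsed out of the session data:

```rust
use std::cmp::Ordering;

#[derive(Clone, Copy)]
struct BackupKeyMeta {
    is_verified: bool,
    first_message_index: u64,
    forwarded_count: u64,
}

/// Returns true when `new` should replace `old`, following the same
/// precedence as the route above; a full tie keeps the existing key.
fn should_replace(old: BackupKeyMeta, new: BackupKeyMeta) -> bool {
    if old.is_verified != new.is_verified {
        return new.is_verified; // verified always wins
    }
    match new.first_message_index.cmp(&old.first_message_index) {
        | Ordering::Less => true,
        | Ordering::Greater => false,
        | Ordering::Equal => new.forwarded_count < old.forwarded_count,
    }
}

fn main() {
    let old = BackupKeyMeta { is_verified: false, first_message_index: 5, forwarded_count: 1 };
    let new = BackupKeyMeta { is_verified: true, first_message_index: 9, forwarded_count: 9 };
    // A verified key wins regardless of the other fields.
    assert!(should_replace(old, new));
    // An identical key is not rewritten.
    assert!(!should_replace(old, old));
}
```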


@@ -1,18 +1,20 @@
 use axum::extract::State;
 use conduwuit::{
-    Err, PduEvent, Result, at, debug_warn, err, ref_at,
+    Err, Result, at, debug_warn, err,
+    matrix::pdu::PduEvent,
+    ref_at,
     utils::{
         IterStream,
         future::TryExtExt,
         stream::{BroadbandExt, ReadyExt, TryIgnore, WidebandExt},
     },
 };
+use conduwuit_service::rooms::{lazy_loading, lazy_loading::Options, short::ShortStateKey};
 use futures::{
     FutureExt, StreamExt, TryFutureExt, TryStreamExt,
     future::{OptionFuture, join, join3, try_join3},
 };
 use ruma::{OwnedEventId, UserId, api::client::context::get_context, events::StateEventType};
-use service::rooms::{lazy_loading, lazy_loading::Options, short::ShortStateKey};

 use crate::{
     Ruma,
@@ -105,7 +107,7 @@ pub(crate) async fn get_context_route(
         .collect();

     let (base_event, events_before, events_after): (_, Vec<_>, Vec<_>) =
-        join3(base_event, events_before, events_after).await;
+        join3(base_event, events_before, events_after).boxed().await;

     let lazy_loading_context = lazy_loading::Context {
         user_id: sender_user,
@@ -182,7 +184,7 @@ pub(crate) async fn get_context_route(
         .await;

     Ok(get_context::v3::Response {
-        event: base_event.map(at!(1)).as_ref().map(PduEvent::to_room_event),
+        event: base_event.map(at!(1)).map(PduEvent::into_room_event),

         start: events_before
             .last()
@@ -201,13 +203,13 @@ pub(crate) async fn get_context_route(
         events_before: events_before
             .into_iter()
             .map(at!(1))
-            .map(|pdu| pdu.to_room_event())
+            .map(PduEvent::into_room_event)
             .collect(),

         events_after: events_after
             .into_iter()
             .map(at!(1))
-            .map(|pdu| pdu.to_room_event())
+            .map(PduEvent::into_room_event)
             .collect(),

         state,
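Several call sites in this changeset swap `.map(|pdu| pdu.to_room_event())` for `.map(PduEvent::into_room_event)`. By Rust naming convention the `to_` form borrows and must clone its payload, while the `into_` form consumes the value the iterator already owns, which presumably saves one clone per returned event. The distinction in miniature:

```rust
#[derive(Clone)]
struct Pdu {
    body: String,
}

struct RoomEvent {
    body: String,
}

impl Pdu {
    /// Borrowing conversion: must clone the payload it does not own.
    fn to_room_event(&self) -> RoomEvent {
        RoomEvent { body: self.body.clone() }
    }

    /// Consuming conversion: moves the payload, no clone needed.
    fn into_room_event(self) -> RoomEvent {
        RoomEvent { body: self.body }
    }
}

fn main() {
    let pdus = vec![Pdu { body: "hello".to_owned() }];

    // The iterator yields owned values, so the consuming form slots
    // straight into `.map(...)` as a bare path, no closure required.
    let events: Vec<RoomEvent> = pdus.into_iter().map(Pdu::into_room_event).collect();
    assert_eq!(events[0].body, "hello");

    // The borrowing form would have been `.map(|pdu| pdu.to_room_event())`.
    let borrowed = Pdu { body: "x".to_owned() }.to_room_event();
    assert_eq!(borrowed.body, "x");
}
```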


@@ -1,9 +1,9 @@
 use axum::extract::State;
 use axum_client_ip::InsecureClientIp;
-use conduwuit::{Err, err};
+use conduwuit::{Err, Error, Result, debug, err, utils};
 use futures::StreamExt;
 use ruma::{
-    MilliSecondsSinceUnixEpoch,
+    MilliSecondsSinceUnixEpoch, OwnedDeviceId,
     api::client::{
         device::{self, delete_device, delete_devices, get_device, get_devices, update_device},
         error::ErrorKind,
@@ -12,7 +12,7 @@ use ruma::{
 };

 use super::SESSION_ID_LENGTH;
-use crate::{Error, Result, Ruma, utils};
+use crate::{Ruma, client::DEVICE_ID_LENGTH};

 /// # `GET /_matrix/client/r0/devices`
 ///
@@ -59,26 +59,58 @@ pub(crate) async fn update_device_route(
     InsecureClientIp(client): InsecureClientIp,
     body: Ruma<update_device::v3::Request>,
 ) -> Result<update_device::v3::Response> {
-    let sender_user = body.sender_user.as_ref().expect("user is authenticated");
+    let sender_user = body.sender_user();
+    let appservice = body.appservice_info.as_ref();

-    let mut device = services
+    match services
         .users
         .get_device_metadata(sender_user, &body.device_id)
         .await
-        .map_err(|_| err!(Request(NotFound("Device not found."))))?;
+    {
+        | Ok(mut device) => {
+            device.display_name.clone_from(&body.display_name);
+            device.last_seen_ip.clone_from(&Some(client.to_string()));
+            device
+                .last_seen_ts
+                .clone_from(&Some(MilliSecondsSinceUnixEpoch::now()));

-    device.display_name.clone_from(&body.display_name);
-    device.last_seen_ip.clone_from(&Some(client.to_string()));
-    device
-        .last_seen_ts
-        .clone_from(&Some(MilliSecondsSinceUnixEpoch::now()));
+            services
+                .users
+                .update_device_metadata(sender_user, &body.device_id, &device)
+                .await?;

-    services
-        .users
-        .update_device_metadata(sender_user, &body.device_id, &device)
-        .await?;
+            Ok(update_device::v3::Response {})
+        },
+        | Err(_) => {
+            let Some(appservice) = appservice else {
+                return Err!(Request(NotFound("Device not found.")));
+            };
+            if !appservice.registration.device_management {
+                return Err!(Request(NotFound("Device not found.")));
+            }

-    Ok(update_device::v3::Response {})
+            debug!(
+                "Creating new device for {sender_user} from appservice {} as MSC4190 is enabled \
+                 and device ID does not exist",
+                appservice.registration.id
+            );
+
+            let device_id = OwnedDeviceId::from(utils::random_string(DEVICE_ID_LENGTH));
+
+            services
+                .users
+                .create_device(
+                    sender_user,
+                    &device_id,
+                    &appservice.registration.as_token,
+                    None,
+                    Some(client.to_string()),
+                )
+                .await?;
+
+            return Ok(update_device::v3::Response {});
+        },
+    }
 }

 /// # `DELETE /_matrix/client/r0/devices/{deviceId}`
@@ -95,8 +127,21 @@ pub(crate) async fn delete_device_route(
     State(services): State<crate::State>,
     body: Ruma<delete_device::v3::Request>,
 ) -> Result<delete_device::v3::Response> {
-    let sender_user = body.sender_user.as_ref().expect("user is authenticated");
-    let sender_device = body.sender_device.as_ref().expect("user is authenticated");
+    let (sender_user, sender_device) = body.sender();
+    let appservice = body.appservice_info.as_ref();
+
+    if appservice.is_some_and(|appservice| appservice.registration.device_management) {
+        debug!(
+            "Skipping UIAA for {sender_user} as this is from an appservice and MSC4190 is \
+             enabled"
+        );
+        services
+            .users
+            .remove_device(sender_user, &body.device_id)
+            .await;
+
+        return Ok(delete_device::v3::Response {});
+    }

     // UIAA
     let mut uiaainfo = UiaaInfo {
@@ -120,11 +165,11 @@ pub(crate) async fn delete_device_route(
             // Success!
         },
         | _ => match body.json_body {
-            | Some(json) => {
+            | Some(ref json) => {
                 uiaainfo.session = Some(utils::random_string(SESSION_ID_LENGTH));
                 services
                     .uiaa
-                    .create(sender_user, sender_device, &uiaainfo, &json);
+                    .create(sender_user, sender_device, &uiaainfo, json);

                 return Err!(Uiaa(uiaainfo));
             },
@@ -142,11 +187,12 @@ pub(crate) async fn delete_device_route(
     Ok(delete_device::v3::Response {})
 }

-/// # `PUT /_matrix/client/r0/devices/{deviceId}`
+/// # `POST /_matrix/client/v3/delete_devices`
 ///
-/// Deletes the given device.
+/// Deletes the given list of devices.
 ///
-/// - Requires UIAA to verify user password
+/// - Requires UIAA to verify user password unless from an appservice with
+///   MSC4190 enabled.
 ///
 /// For each device:
 /// - Invalidates access token
@@ -158,8 +204,20 @@ pub(crate) async fn delete_devices_route(
     State(services): State<crate::State>,
     body: Ruma<delete_devices::v3::Request>,
 ) -> Result<delete_devices::v3::Response> {
-    let sender_user = body.sender_user.as_ref().expect("user is authenticated");
-    let sender_device = body.sender_device.as_ref().expect("user is authenticated");
+    let (sender_user, sender_device) = body.sender();
+    let appservice = body.appservice_info.as_ref();
+
+    if appservice.is_some_and(|appservice| appservice.registration.device_management) {
+        debug!(
+            "Skipping UIAA for {sender_user} as this is from an appservice and MSC4190 is \
+             enabled"
+        );
+        for device_id in &body.devices {
+            services.users.remove_device(sender_user, device_id).await;
+        }
+
+        return Ok(delete_devices::v3::Response {});
+    }

     // UIAA
     let mut uiaainfo = UiaaInfo {
@@ -183,11 +241,11 @@ pub(crate) async fn delete_devices_route(
             // Success!
         },
         | _ => match body.json_body {
-            | Some(json) => {
+            | Some(ref json) => {
                 uiaainfo.session = Some(utils::random_string(SESSION_ID_LENGTH));
                 services
                     .uiaa
-                    .create(sender_user, sender_device, &uiaainfo, &json);
+                    .create(sender_user, sender_device, &uiaainfo, json);

                 return Err(Error::Uiaa(uiaainfo));
             },
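All three device routes now share one gate: when the request comes from an appservice whose registration opts into device management (MSC4190), UIAA is skipped and devices are created or removed directly. A condensed sketch of that gate, with a hypothetical `Registration` type standing in for the real appservice registration:

```rust
/// Hypothetical slice of an appservice registration.
struct Registration {
    device_management: bool,
}

struct AppserviceInfo {
    registration: Registration,
}

/// Mirrors `appservice.is_some_and(|a| a.registration.device_management)`:
/// only an appservice that explicitly opts in may bypass UIAA.
fn may_skip_uiaa(appservice: Option<&AppserviceInfo>) -> bool {
    appservice.is_some_and(|a| a.registration.device_management)
}

fn main() {
    let msc4190 = AppserviceInfo { registration: Registration { device_management: true } };
    let legacy = AppserviceInfo { registration: Registration { device_management: false } };

    assert!(may_skip_uiaa(Some(&msc4190)));
    assert!(!may_skip_uiaa(Some(&legacy))); // still goes through UIAA
    assert!(!may_skip_uiaa(None)); // ordinary clients always do
}
```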


@@ -1,7 +1,19 @@
 use axum::extract::State;
 use axum_client_ip::InsecureClientIp;
-use conduwuit::{Err, Error, Result, info, warn};
-use futures::{StreamExt, TryFutureExt};
+use conduwuit::{
+    Err, Result, err, info,
+    utils::{
+        TryFutureExtExt,
+        math::Expected,
+        result::FlatOk,
+        stream::{ReadyExt, WidebandExt},
+    },
+};
+use conduwuit_service::Services;
+use futures::{
+    FutureExt, StreamExt, TryFutureExt,
+    future::{join, join4, join5},
+};
 use ruma::{
     OwnedRoomId, RoomId, ServerName, UInt, UserId,
     api::{
@@ -10,12 +22,11 @@ use ruma::{
             get_public_rooms, get_public_rooms_filtered, get_room_visibility,
             set_room_visibility,
         },
-        error::ErrorKind,
         room,
     },
     federation,
 },
-    directory::{Filter, PublicRoomJoinRule, PublicRoomsChunk, RoomNetwork},
+    directory::{Filter, PublicRoomJoinRule, PublicRoomsChunk, RoomNetwork, RoomTypeFilter},
     events::{
         StateEventType,
         room::{
@@ -25,7 +36,6 @@ use ruma::{
     },
     uint,
 };
-use service::Services;

 use crate::Ruma;
@@ -42,10 +52,13 @@ pub(crate) async fn get_public_rooms_filtered_route(
 ) -> Result<get_public_rooms_filtered::v3::Response> {
     if let Some(server) = &body.server {
         if services
-            .server
             .config
             .forbidden_remote_room_directory_server_names
-            .contains(server)
+            .is_match(server.host())
+            || services
+                .config
+                .forbidden_remote_server_names
+                .is_match(server.host())
         {
             return Err!(Request(Forbidden("Server is banned on this homeserver.")));
         }
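Both `/publicRooms` endpoints now run the queried server's host through `is_match` on two configured lists instead of an exact `contains` lookup, which is what lets the ban lists express wildcard patterns. The diff does not show the config type itself, so as an assumption, here is the idea illustrated with the `regex` crate's `RegexSet`:

```rust
use regex::RegexSet;

fn main() {
    // Hypothetical configured patterns; the diff only shows that the
    // list answers `.is_match(host)`.
    let forbidden = RegexSet::new([
        r"^bad\.example$",     // one exact host
        r"^(.+\.)?evil\.net$", // a domain and all of its subdomains
    ])
    .expect("patterns are valid regexes");

    assert!(forbidden.is_match("bad.example"));
    assert!(forbidden.is_match("sub.evil.net"));
    assert!(!forbidden.is_match("good.example"));
}
```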
@@ -61,11 +74,7 @@ pub(crate) async fn get_public_rooms_filtered_route(
     )
     .await
     .map_err(|e| {
-        warn!(?body.server, "Failed to return /publicRooms: {e}");
-        Error::BadRequest(
-            ErrorKind::Unknown,
-            "Failed to return the requested server's public room list.",
-        )
+        err!(Request(Unknown(warn!(?body.server, "Failed to return /publicRooms: {e}"))))
     })?;

     Ok(response)
@@ -84,10 +93,13 @@ pub(crate) async fn get_public_rooms_route(
 ) -> Result<get_public_rooms::v3::Response> {
     if let Some(server) = &body.server {
         if services
-            .server
             .config
             .forbidden_remote_room_directory_server_names
-            .contains(server)
+            .is_match(server.host())
+            || services
+                .config
+                .forbidden_remote_server_names
+                .is_match(server.host())
         {
             return Err!(Request(Forbidden("Server is banned on this homeserver.")));
         }
@@ -103,11 +115,7 @@ pub(crate) async fn get_public_rooms_route(
     )
     .await
     .map_err(|e| {
-        warn!(?body.server, "Failed to return /publicRooms: {e}");
-        Error::BadRequest(
-            ErrorKind::Unknown,
-            "Failed to return the requested server's public room list.",
-        )
+        err!(Request(Unknown(warn!(?body.server, "Failed to return /publicRooms: {e}"))))
     })?;

     Ok(get_public_rooms::v3::Response {
@@ -127,7 +135,7 @@ pub(crate) async fn set_room_visibility_route(
     InsecureClientIp(client): InsecureClientIp,
     body: Ruma<set_room_visibility::v3::Request>,
 ) -> Result<set_room_visibility::v3::Response> {
-    let sender_user = body.sender_user.as_ref().expect("user is authenticated");
+    let sender_user = body.sender_user();

     if !services.rooms.metadata.exists(&body.room_id).await {
         // Return 404 if the room doesn't exist
@@ -171,10 +179,9 @@ pub(crate) async fn set_room_visibility_route(
                     .await;
             }

-            return Err(Error::BadRequest(
-                ErrorKind::forbidden(),
+            return Err!(Request(Forbidden(
                 "Publishing rooms to the room directory is not allowed",
-            ));
+            )));
         }

         services.rooms.directory.set_public(&body.room_id);
@@ -192,10 +199,7 @@ pub(crate) async fn set_room_visibility_route(
         },
         | room::Visibility::Private => services.rooms.directory.set_not_public(&body.room_id),
         | _ => {
-            return Err(Error::BadRequest(
-                ErrorKind::InvalidParam,
-                "Room visibility type is not supported.",
-            ));
+            return Err!(Request(InvalidParam("Room visibility type is not supported.",)));
         },
     }
@@ -211,7 +215,7 @@ pub(crate) async fn get_room_visibility_route(
 ) -> Result<get_room_visibility::v3::Response> {
     if !services.rooms.metadata.exists(&body.room_id).await {
         // Return 404 if the room doesn't exist
-        return Err(Error::BadRequest(ErrorKind::NotFound, "Room not found"));
+        return Err!(Request(NotFound("Room not found")));
     }

     Ok(get_room_visibility::v3::Response {
@@ -259,8 +263,8 @@ pub(crate) async fn get_public_rooms_filtered_helper(
     }

     // Use limit or else 10, with maximum 100
-    let limit = limit.map_or(10, u64::from);
-    let mut num_since: u64 = 0;
+    let limit: usize = limit.map_or(10_u64, u64::from).try_into()?;
+    let mut num_since: usize = 0;

     if let Some(s) = &since {
         let mut characters = s.chars();
@@ -268,14 +272,14 @@ pub(crate) async fn get_public_rooms_filtered_helper(
             | Some('n') => false,
             | Some('p') => true,
             | _ => {
-                return Err(Error::BadRequest(ErrorKind::InvalidParam, "Invalid `since` token"));
+                return Err!(Request(InvalidParam("Invalid `since` token")));
             },
         };

         num_since = characters
             .collect::<String>()
             .parse()
-            .map_err(|_| Error::BadRequest(ErrorKind::InvalidParam, "Invalid `since` token."))?;
+            .map_err(|_| err!(Request(InvalidParam("Invalid `since` token."))))?;

         if backwards {
             num_since = num_since.saturating_sub(limit);
@@ -287,8 +291,12 @@ pub(crate) async fn get_public_rooms_filtered_helper(
         .directory
         .public_rooms()
         .map(ToOwned::to_owned)
-        .then(|room_id| public_rooms_chunk(services, room_id))
-        .filter_map(|chunk| async move {
+        .wide_then(|room_id| public_rooms_chunk(services, room_id))
+        .ready_filter_map(|chunk| {
+            if !filter.room_types.is_empty()
+                && !filter.room_types.contains(&RoomTypeFilter::from(chunk.room_type.clone()))
+            {
+                return None;
+            }

             if let Some(query) = filter.generic_search_term.as_ref().map(|q| q.to_lowercase()) {
                 if let Some(name) = &chunk.name {
                     if name.as_str().to_lowercase().contains(&query) {
@@ -320,40 +328,24 @@ pub(crate) async fn get_public_rooms_filtered_helper(
     all_rooms.sort_by(|l, r| r.num_joined_members.cmp(&l.num_joined_members));

-    let total_room_count_estimate = UInt::try_from(all_rooms.len()).unwrap_or_else(|_| uint!(0));
+    let total_room_count_estimate = UInt::try_from(all_rooms.len())
+        .unwrap_or_else(|_| uint!(0))
+        .into();

-    let chunk: Vec<_> = all_rooms
-        .into_iter()
-        .skip(
-            num_since
-                .try_into()
-                .expect("num_since should not be this high"),
-        )
-        .take(limit.try_into().expect("limit should not be this high"))
-        .collect();
+    let chunk: Vec<_> = all_rooms.into_iter().skip(num_since).take(limit).collect();

-    let prev_batch = if num_since == 0 {
-        None
-    } else {
-        Some(format!("p{num_since}"))
-    };
+    let prev_batch = num_since.ne(&0).then_some(format!("p{num_since}"));

-    let next_batch = if chunk.len() < limit.try_into().unwrap() {
-        None
-    } else {
-        Some(format!(
-            "n{}",
-            num_since
-                .checked_add(limit)
-                .expect("num_since and limit should not be that large")
-        ))
-    };
+    let next_batch = chunk
+        .len()
+        .ge(&limit)
+        .then_some(format!("n{}", num_since.expected_add(limit)));

     Ok(get_public_rooms_filtered::v3::Response {
         chunk,
         prev_batch,
         next_batch,
-        total_room_count_estimate: Some(total_room_count_estimate),
+        total_room_count_estimate,
     })
 }
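The simplified pagination above leans on the `since` token being nothing more than a direction prefix (`p` for previous, `n` for next) glued to a numeric offset, with `prev_batch`/`next_batch` re-encoded the same way after slicing the sorted room list. A small sketch of that token format:

```rust
/// Parse a `since` token of the form "p<offset>" or "n<offset>".
/// Returns (backwards, offset), or None for malformed input,
/// mirroring the "Invalid `since` token" errors above.
fn parse_since(token: &str) -> Option<(bool, usize)> {
    let mut chars = token.chars();
    let backwards = match chars.next()? {
        'p' => true,
        'n' => false,
        _ => return None,
    };
    let offset: usize = chars.as_str().parse().ok()?;
    Some((backwards, offset))
}

fn main() {
    assert_eq!(parse_since("n30"), Some((false, 30)));
    assert_eq!(parse_since("p10"), Some((true, 10)));
    assert_eq!(parse_since("x5"), None);

    // Re-encoding for the response, as the helper does after slicing:
    let (num_since, limit, total) = (30usize, 10usize, 55usize);
    let page_len = total.saturating_sub(num_since).min(limit);
    let prev_batch = (num_since != 0).then(|| format!("p{num_since}"));
    let next_batch = (page_len >= limit).then(|| format!("n{}", num_since + limit));
    assert_eq!(prev_batch.as_deref(), Some("p30"));
    assert_eq!(next_batch.as_deref(), Some("n40"));
}
```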
@@ -371,7 +363,7 @@ async fn user_can_publish_room(
     .await
     {
         | Ok(event) => serde_json::from_str(event.content.get())
-            .map_err(|_| Error::bad_database("Invalid event content for m.room.power_levels"))
+            .map_err(|_| err!(Database("Invalid event content for m.room.power_levels")))
             .map(|content: RoomPowerLevelsEventContent| {
                 RoomPowerLevels::from(content)
                     .user_can_send_state(user_id, StateEventType::RoomHistoryVisibility)
@@ -391,60 +383,61 @@ async fn user_can_publish_room(
 }

 async fn public_rooms_chunk(services: &Services, room_id: OwnedRoomId) -> PublicRoomsChunk {
+    let name = services.rooms.state_accessor.get_name(&room_id).ok();
+
+    let room_type = services.rooms.state_accessor.get_room_type(&room_id).ok();
+
+    let canonical_alias = services
+        .rooms
+        .state_accessor
+        .get_canonical_alias(&room_id)
+        .ok();
+
+    let avatar_url = services.rooms.state_accessor.get_avatar(&room_id);
+
+    let topic = services.rooms.state_accessor.get_room_topic(&room_id).ok();
+
+    let world_readable = services.rooms.state_accessor.is_world_readable(&room_id);
+
+    let join_rule = services
+        .rooms
+        .state_accessor
+        .room_state_get_content(&room_id, &StateEventType::RoomJoinRules, "")
+        .map_ok(|c: RoomJoinRulesEventContent| match c.join_rule {
+            | JoinRule::Public => PublicRoomJoinRule::Public,
+            | JoinRule::Knock => "knock".into(),
+            | JoinRule::KnockRestricted(_) => "knock_restricted".into(),
+            | _ => "invite".into(),
+        });
+
+    let guest_can_join = services.rooms.state_accessor.guest_can_join(&room_id);
+
+    let num_joined_members = services.rooms.state_cache.room_joined_count(&room_id);
+
+    let (
+        (avatar_url, canonical_alias, guest_can_join, join_rule, name),
+        (num_joined_members, room_type, topic, world_readable),
+    ) = join(
+        join5(avatar_url, canonical_alias, guest_can_join, join_rule, name),
+        join4(num_joined_members, room_type, topic, world_readable),
+    )
+    .boxed()
+    .await;
+
     PublicRoomsChunk {
-        canonical_alias: services
-            .rooms
-            .state_accessor
-            .get_canonical_alias(&room_id)
-            .await
-            .ok(),
-        name: services.rooms.state_accessor.get_name(&room_id).await.ok(),
-        num_joined_members: services
-            .rooms
-            .state_cache
-            .room_joined_count(&room_id)
-            .await
-            .unwrap_or(0)
-            .try_into()
-            .expect("joined count overflows ruma UInt"),
-        topic: services
-            .rooms
-            .state_accessor
-            .get_room_topic(&room_id)
-            .await
-            .ok(),
-        world_readable: services
-            .rooms
-            .state_accessor
-            .is_world_readable(&room_id)
-            .await,
-        guest_can_join: services.rooms.state_accessor.guest_can_join(&room_id).await,
-        avatar_url: services
-            .rooms
-            .state_accessor
-            .get_avatar(&room_id)
-            .await
-            .into_option()
-            .unwrap_or_default()
-            .url,
-        join_rule: services
-            .rooms
-            .state_accessor
-            .room_state_get_content(&room_id, &StateEventType::RoomJoinRules, "")
-            .map_ok(|c: RoomJoinRulesEventContent| match c.join_rule {
-                | JoinRule::Public => PublicRoomJoinRule::Public,
-                | JoinRule::Knock => "knock".into(),
-                | JoinRule::KnockRestricted(_) => "knock_restricted".into(),
-                | _ => "invite".into(),
-            })
-            .await
-            .unwrap_or_default(),
-        room_type: services
-            .rooms
-            .state_accessor
-            .get_room_type(&room_id)
-            .await
-            .ok(),
+        avatar_url: avatar_url.into_option().unwrap_or_default().url,
+        canonical_alias,
+        guest_can_join,
+        join_rule: join_rule.unwrap_or_default(),
+        name,
+        num_joined_members: num_joined_members
+            .map(TryInto::try_into)
+            .map(Result::ok)
+            .flat_ok()
+            .unwrap_or_else(|| uint!(0)),
         room_id,
+        room_type,
+        topic,
+        world_readable,
     }
 }
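`public_rooms_chunk` previously awaited each room-state lookup one after another; the rewrite builds all nine futures first and drives them concurrently through nested `join5`/`join4` (boxed to keep the combined future small). The shape of that change, reduced to two mock lookups (hypothetical functions, tokio assumed as the runtime):

```rust
use std::time::Duration;

use futures::join;
use tokio::time::sleep;

async fn lookup_name() -> &'static str {
    sleep(Duration::from_millis(50)).await;
    "Room"
}

async fn lookup_topic() -> &'static str {
    sleep(Duration::from_millis(50)).await;
    "Topic"
}

#[tokio::main]
async fn main() {
    // Sequential: roughly the sum of both delays, one await at a time.
    let _name = lookup_name().await;
    let _topic = lookup_topic().await;

    // Concurrent: roughly the longest single delay; both futures are
    // polled together, as the join5/join4 nesting above does for the
    // nine room fields.
    let (name, topic) = join!(lookup_name(), lookup_topic());
    assert_eq!((name, topic), ("Room", "Topic"));
}
```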


@@ -1,8 +1,8 @@
 use axum::extract::State;
-use conduwuit::err;
+use conduwuit::{Result, err};
 use ruma::api::client::filter::{create_filter, get_filter};

-use crate::{Result, Ruma};
+use crate::Ruma;

 /// # `GET /_matrix/client/r0/user/{userId}/filter/{filterId}`
 ///


@@ -1,7 +1,8 @@
 use std::collections::{BTreeMap, HashMap, HashSet};

 use axum::extract::State;
-use conduwuit::{Err, Error, Result, debug, debug_warn, err, info, result::NotFound, utils};
+use conduwuit::{Err, Error, Result, debug, debug_warn, err, result::NotFound, utils};
+use conduwuit_service::{Services, users::parse_master_key};
 use futures::{StreamExt, stream::FuturesUnordered};
 use ruma::{
     OneTimeKeyAlgorithm, OwnedDeviceId, OwnedUserId, UserId,
@@ -9,7 +10,8 @@ use ruma::{
         client::{
             error::ErrorKind,
             keys::{
-                claim_keys, get_key_changes, get_keys, upload_keys, upload_signatures,
+                claim_keys, get_key_changes, get_keys, upload_keys,
+                upload_signatures::{self},
                 upload_signing_keys,
             },
             uiaa::{AuthFlow, AuthType, UiaaInfo},
@@ -22,10 +24,7 @@ use ruma::{
 use serde_json::json;

 use super::SESSION_ID_LENGTH;
-use crate::{
-    Ruma,
-    service::{Services, users::parse_master_key},
-};
+use crate::Ruma;

 /// # `POST /_matrix/client/r0/keys/upload`
 ///
@@ -80,14 +79,26 @@ pub(crate) async fn upload_keys_route(
         )));
     }

-    // TODO: merge this and the existing event?
-    // This check is needed to assure that signatures are kept
-    if services
+    if let Ok(existing_keys) = services
         .users
         .get_device_keys(sender_user, sender_device)
         .await
-        .is_err()
     {
+        if existing_keys.json().get() == device_keys.json().get() {
+            debug!(
+                ?sender_user,
+                ?sender_device,
+                ?device_keys,
+                "Ignoring user uploaded keys as they are an exact copy already in the \
+                 database"
+            );
+        } else {
+            services
+                .users
+                .add_device_keys(sender_user, sender_device, device_keys)
+                .await;
+        }
+    } else {
         services
             .users
             .add_device_keys(sender_user, sender_device, device_keys)
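Rather than writing device keys only when none exist, the route now fetches the stored keys and compares the raw JSON: a byte-identical re-upload is logged and dropped (preserving whatever signatures are already attached), while anything else overwrites. A toy compare-before-write store showing the same idempotency rule:

```rust
use std::collections::HashMap;

/// Toy store: device id -> serialized device keys.
struct KeyStore {
    keys: HashMap<String, String>,
}

impl KeyStore {
    /// Returns true when a write actually happened; an exact copy of
    /// what is already stored is ignored, like the route above.
    fn upload(&mut self, device: &str, keys_json: &str) -> bool {
        if self.keys.get(device).is_some_and(|old| old.as_str() == keys_json) {
            return false; // exact duplicate: keep the stored record intact
        }
        self.keys.insert(device.to_owned(), keys_json.to_owned());
        true
    }
}

fn main() {
    let mut store = KeyStore { keys: HashMap::new() };
    assert!(store.upload("DEVICE_A", r#"{"alg":"x"}"#)); // first write
    assert!(!store.upload("DEVICE_A", r#"{"alg":"x"}"#)); // no-op re-upload
    assert!(store.upload("DEVICE_A", r#"{"alg":"y"}"#)); // changed keys
}
```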
@@ -166,7 +177,7 @@ pub(crate) async fn upload_signing_keys_route(
         body.master_key.as_ref(),
     )
     .await
-    .inspect_err(|e| info!(?e))
+    .inspect_err(|e| debug!(?e))
     {
         | Ok(exists) => {
             if let Some(result) = exists {
@@ -296,53 +307,60 @@ async fn check_for_new_keys(
 /// # `POST /_matrix/client/r0/keys/signatures/upload`
 ///
 /// Uploads end-to-end key signatures from the sender user.
+///
+/// TODO: clean this timo-code up more and integrate failures. tried to improve
+/// it a bit to stop exploding the entire request on bad sigs, but needs way
+/// more work.
 pub(crate) async fn upload_signatures_route(
     State(services): State<crate::State>,
     body: Ruma<upload_signatures::v3::Request>,
 ) -> Result<upload_signatures::v3::Response> {
-    let sender_user = body.sender_user.as_ref().expect("user is authenticated");
+    if body.signed_keys.is_empty() {
+        debug!("Empty signed_keys sent in key signature upload");
+        return Ok(upload_signatures::v3::Response::new());
+    }
+
+    let sender_user = body.sender_user();

     for (user_id, keys) in &body.signed_keys {
         for (key_id, key) in keys {
-            let key = serde_json::to_value(key)
-                .map_err(|_| Error::BadRequest(ErrorKind::InvalidParam, "Invalid key JSON"))?;
+            let Ok(key) = serde_json::to_value(key)
+                .inspect_err(|e| debug_warn!(?key_id, "Invalid \"key\" JSON: {e}"))
+            else {
+                continue;
+            };

-            for signature in key
-                .get("signatures")
-                .ok_or(Error::BadRequest(ErrorKind::InvalidParam, "Missing signatures field."))?
-                .get(sender_user.to_string())
-                .ok_or(Error::BadRequest(
-                    ErrorKind::InvalidParam,
-                    "Invalid user in signatures field.",
-                ))?
-                .as_object()
-                .ok_or(Error::BadRequest(ErrorKind::InvalidParam, "Invalid signature."))?
-                .clone()
-            {
-                // Signature validation?
-                let signature = (
-                    signature.0,
-                    signature
-                        .1
-                        .as_str()
-                        .ok_or(Error::BadRequest(
-                            ErrorKind::InvalidParam,
-                            "Invalid signature value.",
-                        ))?
-                        .to_owned(),
-                );
-
-                services
+            let Some(signatures) = key.get("signatures") else {
+                continue;
+            };
+
+            let Some(sender_user_val) = signatures.get(sender_user.to_string()) else {
+                continue;
+            };
+
+            let Some(sender_user_object) = sender_user_val.as_object() else {
+                continue;
+            };
+
+            for (signature, val) in sender_user_object.clone() {
+                let Some(val) = val.as_str().map(ToOwned::to_owned) else {
+                    continue;
+                };
+                let signature = (signature, val);
+
+                if let Err(_e) = services
                     .users
                     .sign_key(user_id, key_id, signature, sender_user)
-                    .await?;
+                    .await
+                    .inspect_err(|e| debug_warn!("{e}"))
+                {
+                    continue;
+                }
             }
         }
     }

-    Ok(upload_signatures::v3::Response {
-        failures: BTreeMap::new(), // TODO: integrate
-    })
+    Ok(upload_signatures::v3::Response { failures: BTreeMap::new() })
 }

 /// # `POST /_matrix/client/r0/keys/changes`
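The rewritten signature-upload loop trades `?`-style hard failures for `let … else { continue }`: one malformed key or signature now skips just that entry instead of failing the entire request. The control-flow pattern in isolation, over plain `serde_json` values:

```rust
use serde_json::{Value, json};

/// Extract (signature id, signature) pairs, skipping malformed entries
/// instead of returning an error for the whole batch.
fn usable_signatures(keys: &[(String, Value)], sender: &str) -> Vec<(String, String)> {
    let mut out = Vec::new();
    for (key_id, key) in keys {
        let Some(signatures) = key.get("signatures") else {
            continue; // no signatures object: skip this key only
        };
        let Some(by_sender) = signatures.get(sender).and_then(Value::as_object) else {
            continue; // sender missing or not an object: skip
        };
        for (sig_id, sig) in by_sender {
            let Some(sig) = sig.as_str() else { continue };
            out.push((sig_id.clone(), sig.to_owned()));
        }
        let _ = key_id; // key_id would feed the real sign_key call
    }
    out
}

fn main() {
    let keys = vec![
        ("KEY1".to_owned(), json!({"signatures": {"@u:s": {"ed25519:A": "sigA"}}})),
        ("KEY2".to_owned(), json!({"no_signatures": true})), // silently skipped
    ];
    let sigs = usable_signatures(&keys, "@u:s");
    assert_eq!(sigs, vec![("ed25519:A".to_owned(), "sigA".to_owned())]);
}
```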


@@ -9,13 +9,25 @@ use std::{
 use axum::extract::State;
 use axum_client_ip::InsecureClientIp;
 use conduwuit::{
-    Err, PduEvent, Result, StateKey, at, debug, debug_info, debug_warn, err, error, info,
-    pdu::{PduBuilder, gen_event_id_canonical_json},
+    Err, Result, at, debug, debug_info, debug_warn, err, error, info,
+    matrix::{
+        StateKey,
+        pdu::{PduBuilder, PduEvent, gen_event_id, gen_event_id_canonical_json},
+        state_res,
+    },
     result::{FlatOk, NotFound},
-    state_res, trace,
+    trace,
     utils::{self, IterStream, ReadyExt, shuffle},
     warn,
 };
+use conduwuit_service::{
+    Services,
+    appservice::RegistrationInfo,
+    rooms::{
+        state::RoomMutexGuard,
+        state_compressor::{CompressedState, HashSetCompressStateEvent},
+    },
+};
 use futures::{FutureExt, StreamExt, TryFutureExt, future::join4, join};
 use ruma::{
     CanonicalJsonObject, CanonicalJsonValue, OwnedEventId, OwnedRoomId, OwnedServerName,
@@ -25,8 +37,9 @@ use ruma::{
         error::ErrorKind,
         knock::knock_room,
         membership::{
-            ThirdPartySigned, ban_user, forget_room, get_member_events, invite_user,
-            join_room_by_id, join_room_by_id_or_alias,
+            ThirdPartySigned, ban_user, forget_room,
+            get_member_events::{self, v3::MembershipEventFilter},
+            invite_user, join_room_by_id, join_room_by_id_or_alias,
             joined_members::{self, v3::RoomMember},
             joined_rooms, kick_user, leave_room, unban_user,
         },
@@ -43,15 +56,6 @@ use ruma::{
         },
     },
 };
-use service::{
-    Services,
-    appservice::RegistrationInfo,
-    pdu::gen_event_id,
-    rooms::{
-        state::RoomMutexGuard,
-        state_compressor::{CompressedState, HashSetCompressStateEvent},
-    },
-};

 use crate::{Ruma, client::full_user_deactivate};
@@ -75,10 +79,9 @@ async fn banned_room_check(
     if let Some(room_id) = room_id {
         if services.rooms.metadata.is_banned(room_id).await
             || services
-                .server
                 .config
                 .forbidden_remote_server_names
-                .contains(&room_id.server_name().unwrap().to_owned())
+                .is_match(room_id.server_name().unwrap().host())
         {
             warn!(
                 "User {user_id} who is not an admin attempted to send an invite for or \
@@ -116,10 +119,9 @@ async fn banned_room_check(
         }
     } else if let Some(server_name) = server_name {
         if services
-            .server
             .config
             .forbidden_remote_server_names
-            .contains(&server_name.to_owned())
+            .is_match(server_name.host())
         {
             warn!(
                 "User {user_id} who is not an admin tried joining a room which has the server \
@@ -474,9 +476,9 @@ pub(crate) async fn leave_room_route(
     State(services): State<crate::State>,
     body: Ruma<leave_room::v3::Request>,
 ) -> Result<leave_room::v3::Response> {
-    leave_room(&services, body.sender_user(), &body.room_id, body.reason.clone()).await?;
-
-    Ok(leave_room::v3::Response::new())
+    leave_room(&services, body.sender_user(), &body.room_id, body.reason.clone())
+        .await
+        .map(|()| leave_room::v3::Response::new())
 }

 /// # `POST /_matrix/client/r0/rooms/{roomId}/invite`
@@ -490,7 +492,7 @@ pub(crate) async fn invite_user_route(
 ) -> Result<invite_user::v3::Response> {
     let sender_user = body.sender_user();

-    if !services.users.is_admin(sender_user).await && services.globals.block_non_admin_invites() {
+    if !services.users.is_admin(sender_user).await && services.config.block_non_admin_invites {
         info!(
             "User {sender_user} is not an admin and attempted to send an invite to room {}",
             &body.room_id
@@ -768,6 +770,54 @@ pub(crate) async fn joined_rooms_route(
     })
 }

+fn membership_filter(
+    pdu: PduEvent,
+    for_membership: Option<&MembershipEventFilter>,
+    not_membership: Option<&MembershipEventFilter>,
+) -> Option<PduEvent> {
+    let membership_state_filter = match for_membership {
+        | Some(MembershipEventFilter::Ban) => MembershipState::Ban,
+        | Some(MembershipEventFilter::Invite) => MembershipState::Invite,
+        | Some(MembershipEventFilter::Knock) => MembershipState::Knock,
+        | Some(MembershipEventFilter::Leave) => MembershipState::Leave,
+        | Some(_) | None => MembershipState::Join,
+    };
+
+    let not_membership_state_filter = match not_membership {
+        | Some(MembershipEventFilter::Ban) => MembershipState::Ban,
+        | Some(MembershipEventFilter::Invite) => MembershipState::Invite,
+        | Some(MembershipEventFilter::Join) => MembershipState::Join,
+        | Some(MembershipEventFilter::Knock) => MembershipState::Knock,
+        | Some(_) | None => MembershipState::Leave,
+    };
+
+    let evt_membership = pdu.get_content::<RoomMemberEventContent>().ok()?.membership;
+
+    if for_membership.is_some() && not_membership.is_some() {
+        if membership_state_filter != evt_membership
+            || not_membership_state_filter == evt_membership
+        {
+            None
+        } else {
+            Some(pdu)
+        }
+    } else if for_membership.is_some() && not_membership.is_none() {
+        if membership_state_filter != evt_membership {
+            None
+        } else {
+            Some(pdu)
+        }
+    } else if not_membership.is_some() && for_membership.is_none() {
+        if not_membership_state_filter == evt_membership {
+            None
+        } else {
+            Some(pdu)
+        }
+    } else {
+        Some(pdu)
+    }
+}
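The four-way `if`/`else if` in `membership_filter` boils down to one rule: keep the event unless a requested membership fails to match, or an excluded membership does match. A condensed sketch of that predicate (the merged code additionally maps unknown filter values onto `Join`/`Leave` defaults, which this sketch omits):

```rust
#[derive(PartialEq, Clone, Copy)]
enum Membership {
    Join,
    Leave,
    Ban,
    Invite,
    Knock,
}

/// Keep the event iff it passes both optional filters; `None` means
/// "no constraint", like an omitted query parameter.
fn keep(evt: Membership, wanted: Option<Membership>, excluded: Option<Membership>) -> bool {
    wanted.map_or(true, |m| evt == m) && excluded.map_or(true, |m| evt != m)
}

fn main() {
    use Membership::*;

    // ?membership=join keeps only joins.
    assert!(keep(Join, Some(Join), None));
    assert!(!keep(Ban, Some(Join), None));

    // ?not_membership=leave drops leaves and keeps everything else.
    assert!(!keep(Leave, None, Some(Leave)));
    assert!(keep(Invite, None, Some(Leave)));

    // No filters: everything passes.
    assert!(keep(Knock, None, None));
}
```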
 /// # `POST /_matrix/client/r0/rooms/{roomId}/members`
 ///
 /// Lists all joined users in a room (TODO: at a specific point in time, with a
@@ -779,6 +829,8 @@ pub(crate) async fn get_member_events_route(
     body: Ruma<get_member_events::v3::Request>,
 ) -> Result<get_member_events::v3::Response> {
     let sender_user = body.sender_user();
+    let membership = body.membership.as_ref();
+    let not_membership = body.not_membership.as_ref();

     if !services
         .rooms
@@ -797,6 +849,7 @@ pub(crate) async fn get_member_events_route(
             .ready_filter_map(Result::ok)
             .ready_filter(|((ty, _), _)| *ty == StateEventType::RoomMember)
             .map(at!(1))
+            .ready_filter_map(|pdu| membership_filter(pdu, membership, not_membership))
             .map(PduEvent::into_member_event)
             .collect()
             .await,
@@ -1576,7 +1629,7 @@ pub(crate) async fn invite_helper(
     reason: Option<String>,
     is_direct: bool,
 ) -> Result {
-    if !services.users.is_admin(sender_user).await && services.globals.block_non_admin_invites() {
+    if !services.users.is_admin(sender_user).await && services.config.block_non_admin_invites {
         info!(
             "User {sender_user} is not an admin and attempted to send an invite to room \
             {room_id}"
@@ -1711,8 +1764,8 @@ pub(crate) async fn invite_helper(
     Ok(())
 }

-// Make a user leave all their joined rooms, forgets all rooms, and ignores
-// errors
+// Make a user leave all their joined rooms, rescinds knocks, forgets all rooms,
+// and ignores errors
 pub async fn leave_all_rooms(services: &Services, user_id: &UserId) {
     let rooms_joined = services
         .rooms
@@ -1726,7 +1779,17 @@ pub async fn leave_all_rooms(services: &Services, user_id: &UserId) {
         .rooms_invited(user_id)
         .map(|(r, _)| r);

-    let all_rooms: Vec<_> = rooms_joined.chain(rooms_invited).collect().await;
+    let rooms_knocked = services
+        .rooms
+        .state_cache
+        .rooms_knocked(user_id)
+        .map(|(r, _)| r);
+
+    let all_rooms: Vec<_> = rooms_joined
+        .chain(rooms_invited)
+        .chain(rooms_knocked)
+        .collect()
+        .await;

     for room_id in all_rooms {
         // ignore errors
@@ -1743,7 +1806,40 @@ pub async fn leave_room(
     user_id: &UserId,
     room_id: &RoomId,
     reason: Option<String>,
-) -> Result<()> {
+) -> Result {
+    let default_member_content = RoomMemberEventContent {
+        membership: MembershipState::Leave,
+        reason: reason.clone(),
+        join_authorized_via_users_server: None,
+        is_direct: None,
+        avatar_url: None,
+        displayname: None,
+        third_party_invite: None,
+        blurhash: None,
+    };
+
+    if services.rooms.metadata.is_banned(room_id).await
+        || services.rooms.metadata.is_disabled(room_id).await
+    {
+        // the room is banned/disabled, the room must be rejected locally since we
+        // cant/dont want to federate with this server
+        services
+            .rooms
+            .state_cache
+            .update_membership(
+                room_id,
+                user_id,
+                default_member_content,
+                user_id,
+                None,
+                None,
+                true,
+            )
+            .await?;
+
+        return Ok(());
+    }
+
     // Ask a remote server if we don't have this room and are not knocking on it
     if !services
         .rooms
@@ -1776,7 +1872,7 @@ pub async fn leave_room(
         .update_membership(
             room_id,
             user_id,
-            RoomMemberEventContent::new(MembershipState::Leave),
+            default_member_content,
             user_id,
             last_state,
             None,
@@ -1796,26 +1892,23 @@ pub async fn leave_room(
         )
         .await
         else {
-            // Fix for broken rooms
-            warn!(
+            debug_warn!(
                 "Trying to leave a room you are not a member of, marking room as left locally."
             );

-            services
+            return services
                 .rooms
                 .state_cache
                 .update_membership(
                     room_id,
                     user_id,
-                    RoomMemberEventContent::new(MembershipState::Leave),
+                    default_member_content,
                     user_id,
                     None,
                     None,
                     true,
                 )
-                .await?;
-            return Ok(());
+                .await;
         };

     services
@@ -1845,7 +1938,7 @@ async fn remote_leave_room(
     room_id: &RoomId,
 ) -> Result<()> {
     let mut make_leave_response_and_server =
-        Err!(BadServerResponse("No server available to assist in leaving."));
+        Err!(BadServerResponse("No remote server available to assist in leaving {room_id}."));

     let mut servers: HashSet<OwnedServerName> = services
         .rooms
@@ -1925,20 +2018,25 @@ async fn remote_leave_room(
     let (make_leave_response, remote_server) = make_leave_response_and_server?;

     let Some(room_version_id) = make_leave_response.room_version else {
-        return Err!(BadServerResponse("Remote room version is not supported by conduwuit"));
+        return Err!(BadServerResponse(warn!(
+            "No room version was returned by {remote_server} for {room_id}, room version is \
+             likely not supported by conduwuit"
+        )));
     };

     if !services.server.supported_room_version(&room_version_id) {
-        return Err!(BadServerResponse(
-            "Remote room version {room_version_id} is not supported by conduwuit"
-        ));
+        return Err!(BadServerResponse(warn!(
+            "Remote room version {room_version_id} for {room_id} is not supported by conduwuit",
+        )));
     }

     let mut leave_event_stub = serde_json::from_str::<CanonicalJsonObject>(
         make_leave_response.event.get(),
     )
     .map_err(|e| {
-        err!(BadServerResponse("Invalid make_leave event json received from server: {e:?}"))
+        err!(BadServerResponse(warn!(
+            "Invalid make_leave event json received from {remote_server} for {room_id}: {e:?}"
+        )))
     })?;

     // TODO: Is origin needed?
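`leave_room` now builds the leave `RoomMemberEventContent` once up front and picks one of three paths: banned or disabled rooms are rewritten locally with no federation at all, rooms whose state we hold get a locally-built leave event, and unknown rooms fall back to remote-assisted `make_leave`/`send_leave`. The decision order as a sketch (the enum names here are mine, not the code's):

```rust
#[derive(Debug, PartialEq)]
enum LeavePath {
    /// Banned/disabled room: update membership locally, never federate.
    LocalOnly,
    /// We hold the room state: emit a proper leave event ourselves.
    LocalLeaveEvent,
    /// Not in our state: ask a remote server via make_leave/send_leave.
    RemoteAssisted,
}

fn choose_path(banned_or_disabled: bool, have_room_state: bool) -> LeavePath {
    if banned_or_disabled {
        LeavePath::LocalOnly
    } else if have_room_state {
        LeavePath::LocalLeaveEvent
    } else {
        LeavePath::RemoteAssisted
    }
}

fn main() {
    assert_eq!(choose_path(true, true), LeavePath::LocalOnly);
    assert_eq!(choose_path(false, true), LeavePath::LocalLeaveEvent);
    assert_eq!(choose_path(false, false), LeavePath::RemoteAssisted);
}
```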


@@ -1,12 +1,24 @@
 use axum::extract::State;
 use conduwuit::{
-    Err, Event, PduCount, PduEvent, Result, at,
+    Err, Result, at,
+    matrix::{
+        Event,
+        pdu::{PduCount, PduEvent},
+    },
     utils::{
         IterStream, ReadyExt,
         result::{FlatOk, LogErr},
         stream::{BroadbandExt, TryIgnore, WidebandExt},
     },
 };
+use conduwuit_service::{
+    Services,
+    rooms::{
+        lazy_loading,
+        lazy_loading::{Options, Witness},
+        timeline::PdusIterItem,
+    },
+};
 use futures::{FutureExt, StreamExt, TryFutureExt, future::OptionFuture, pin_mut};
 use ruma::{
     RoomId, UserId,
@@ -17,14 +29,6 @@ use ruma::{
     events::{AnyStateEvent, StateEventType, TimelineEventType, TimelineEventType::*},
     serde::Raw,
 };
-use service::{
-    Services,
-    rooms::{
-        lazy_loading,
-        lazy_loading::{Options, Witness},
-        timeline::PdusIterItem,
-    },
-};

 use crate::Ruma;
@@ -157,7 +161,7 @@ pub(crate) async fn get_message_events_route(
     let chunk = events
         .into_iter()
         .map(at!(1))
-        .map(|pdu| pdu.to_room_event())
+        .map(PduEvent::into_room_event)
         .collect();

     Ok(get_message_events::v3::Response {
@@ -257,10 +261,9 @@ pub(crate) async fn is_ignored_pdu(
     let ignored_type = IGNORED_MESSAGE_TYPES.binary_search(&pdu.kind).is_ok();

     let ignored_server = services
-        .server
         .config
         .forbidden_remote_server_names
-        .contains(pdu.sender().server_name());
+        .is_match(pdu.sender().server_name().host());

     if ignored_type
         && (ignored_server || services.users.user_is_ignored(&pdu.sender, user_id).await)


@@ -1,14 +1,14 @@
 use std::time::Duration;

 use axum::extract::State;
-use conduwuit::utils;
+use conduwuit::{Error, Result, utils};
 use ruma::{
     api::client::{account, error::ErrorKind},
     authentication::TokenType,
 };

 use super::TOKEN_LENGTH;
-use crate::{Error, Result, Ruma};
+use crate::Ruma;

 /// # `POST /_matrix/client/v3/user/{userId}/openid/request_token`
 ///


@@ -1,12 +1,10 @@
 use std::time::Duration;

 use axum::extract::State;
-use ruma::api::client::{
-    error::ErrorKind,
-    presence::{get_presence, set_presence},
-};
+use conduwuit::{Err, Result};
+use ruma::api::client::presence::{get_presence, set_presence};

-use crate::{Error, Result, Ruma};
+use crate::Ruma;

 /// # `PUT /_matrix/client/r0/presence/{userId}/status`
 ///
@@ -15,24 +13,17 @@ pub(crate) async fn set_presence_route(
     State(services): State<crate::State>,
     body: Ruma<set_presence::v3::Request>,
 ) -> Result<set_presence::v3::Response> {
-    if !services.globals.allow_local_presence() {
-        return Err(Error::BadRequest(
-            ErrorKind::forbidden(),
-            "Presence is disabled on this server",
-        ));
+    if !services.config.allow_local_presence {
+        return Err!(Request(Forbidden("Presence is disabled on this server")));
     }

-    let sender_user = body.sender_user.as_ref().expect("user is authenticated");
-    if sender_user != &body.user_id && body.appservice_info.is_none() {
-        return Err(Error::BadRequest(
-            ErrorKind::InvalidParam,
-            "Not allowed to set presence of other users",
-        ));
+    if body.sender_user() != body.user_id && body.appservice_info.is_none() {
+        return Err!(Request(InvalidParam("Not allowed to set presence of other users")));
     }

     services
         .presence
-        .set_presence(sender_user, &body.presence, None, None, body.status_msg.clone())
+        .set_presence(body.sender_user(), &body.presence, None, None, body.status_msg.clone())
         .await?;

     Ok(set_presence::v3::Response {})
@@ -47,21 +38,15 @@ pub(crate) async fn get_presence_route(
     State(services): State<crate::State>,
     body: Ruma<get_presence::v3::Request>,
 ) -> Result<get_presence::v3::Response> {
-    if !services.globals.allow_local_presence() {
-        return Err(Error::BadRequest(
-            ErrorKind::forbidden(),
-            "Presence is disabled on this server",
-        ));
+    if !services.config.allow_local_presence {
+        return Err!(Request(Forbidden("Presence is disabled on this server",)));
     }

-    let sender_user = body.sender_user.as_ref().expect("user is authenticated");
-
     let mut presence_event = None;

     let has_shared_rooms = services
         .rooms
         .state_cache
-        .user_sees_user(sender_user, &body.user_id)
+        .user_sees_user(body.sender_user(), &body.user_id)
         .await;

     if has_shared_rooms {
@@ -99,9 +84,6 @@ pub(crate) async fn get_presence_route(
                 presence: presence.content.presence,
             })
         },
-        | _ => Err(Error::BadRequest(
-            ErrorKind::NotFound,
-            "Presence state for this user was not found",
-        )),
+        | _ => Err!(Request(NotFound("Presence state for this user was not found"))),
     }
 }


@@ -3,10 +3,11 @@ use std::collections::BTreeMap;
use axum::extract::State;
use conduwuit::{
	Err, Error, Result,
-	pdu::PduBuilder,
+	matrix::pdu::PduBuilder,
	utils::{IterStream, stream::TryIgnore},
	warn,
};
+use conduwuit_service::Services;
use futures::{StreamExt, TryStreamExt, future::join3};
use ruma::{
	OwnedMxcUri, OwnedRoomId, UserId,
@@ -22,7 +23,6 @@ use ruma::{
	events::room::member::{MembershipState, RoomMemberEventContent},
	presence::PresenceState,
};
-use service::Services;
use crate::Ruma;
@@ -52,7 +52,7 @@ pub(crate) async fn set_displayname_route(
	update_displayname(&services, &body.user_id, body.displayname.clone(), &all_joined_rooms)
		.await;
-	if services.globals.allow_local_presence() {
+	if services.config.allow_local_presence {
		// Presence update
		services
			.presence
@@ -147,7 +147,7 @@ pub(crate) async fn set_avatar_url_route(
	)
	.await;
-	if services.globals.allow_local_presence() {
+	if services.config.allow_local_presence {
		// Presence update
		services
			.presence


@@ -1,5 +1,6 @@
use axum::extract::State;
-use conduwuit::{Err, err};
+use conduwuit::{Err, Error, Result, err};
+use conduwuit_service::Services;
use ruma::{
	CanonicalJsonObject, CanonicalJsonValue,
	api::client::{
@@ -19,9 +20,8 @@ use ruma::{
		RemovePushRuleError, Ruleset,
	},
};
-use service::Services;
-use crate::{Error, Result, Ruma};
+use crate::Ruma;
/// # `GET /_matrix/client/r0/pushrules/`
///
@@ -503,7 +503,7 @@ pub(crate) async fn set_pushers_route(
	services
		.pusher
-		.set_pusher(sender_user, &body.action)
+		.set_pusher(sender_user, body.sender_device(), &body.action)
		.await?;
	Ok(set_pusher::v3::Response::new())


@@ -1,7 +1,7 @@
use std::collections::BTreeMap;
use axum::extract::State;
-use conduwuit::{Err, PduCount, err};
+use conduwuit::{Err, PduCount, Result, err};
use ruma::{
	MilliSecondsSinceUnixEpoch,
	api::client::{read_marker::set_read_marker, receipt::create_receipt},
@@ -11,7 +11,7 @@ use ruma::{
	},
};
-use crate::{Result, Ruma};
+use crate::Ruma;
/// # `POST /_matrix/client/r0/rooms/{roomId}/read_markers`
///
@@ -50,7 +50,7 @@ pub(crate) async fn set_read_marker_route(
	}
	// ping presence
-	if services.globals.allow_local_presence() {
+	if services.config.allow_local_presence {
		services
			.presence
			.ping_presence(sender_user, &ruma::presence::PresenceState::Online)
@@ -126,7 +126,7 @@ pub(crate) async fn create_receipt_route(
	}
	// ping presence
-	if services.globals.allow_local_presence() {
+	if services.config.allow_local_presence {
		services
			.presence
			.ping_presence(sender_user, &ruma::presence::PresenceState::Online)


@@ -1,9 +1,10 @@
use axum::extract::State;
+use conduwuit::{Result, matrix::pdu::PduBuilder};
use ruma::{
	api::client::redact::redact_event, events::room::redaction::RoomRedactionEventContent,
};
-use crate::{Result, Ruma, service::pdu::PduBuilder};
+use crate::Ruma;
/// # `PUT /_matrix/client/r0/rooms/{roomId}/redact/{eventId}/{txnId}`
///


@@ -1,8 +1,10 @@
use axum::extract::State;
use conduwuit::{
-	PduCount, Result, at,
+	Result, at,
+	matrix::pdu::PduCount,
	utils::{IterStream, ReadyExt, result::FlatOk, stream::WidebandExt},
};
+use conduwuit_service::{Services, rooms::timeline::PdusIterItem};
use futures::StreamExt;
use ruma::{
	EventId, RoomId, UInt, UserId,
@@ -15,7 +17,6 @@ use ruma::{
	},
	events::{TimelineEventType, relation::RelationType},
};
-use service::{Services, rooms::timeline::PdusIterItem};
use crate::Ruma;


@@ -2,7 +2,8 @@ use std::time::Duration;
use axum::extract::State;
use axum_client_ip::InsecureClientIp;
-use conduwuit::{Err, info, utils::ReadyExt};
+use conduwuit::{Err, Error, Result, debug_info, info, matrix::pdu::PduEvent, utils::ReadyExt};
+use conduwuit_service::Services;
use rand::Rng;
use ruma::{
	EventId, RoomId, UserId,
@@ -15,10 +16,7 @@ use ruma::{
};
use tokio::time::sleep;
-use crate::{
-	Error, Result, Ruma, debug_info,
-	service::{Services, pdu::PduEvent},
-};
+use crate::Ruma;
/// # `POST /_matrix/client/v3/rooms/{roomId}/report`
///


@@ -2,8 +2,11 @@ use std::collections::BTreeMap;
use axum::extract::State;
use conduwuit::{
-	Err, Error, Result, StateKey, debug_info, debug_warn, err, error, info, pdu::PduBuilder, warn,
+	Err, Error, Result, debug_info, debug_warn, err, error, info,
+	matrix::{StateKey, pdu::PduBuilder},
+	warn,
};
+use conduwuit_service::{Services, appservice::RegistrationInfo};
use futures::FutureExt;
use ruma::{
	CanonicalJsonObject, Int, OwnedRoomAliasId, OwnedRoomId, OwnedUserId, RoomId, RoomVersionId,
@@ -29,7 +32,6 @@ use ruma::{
	serde::{JsonObject, Raw},
};
use serde_json::{json, value::to_raw_value};
-use service::{Services, appservice::RegistrationInfo};
use crate::{Ruma, client::invite_helper};
@@ -372,7 +374,7 @@ pub(crate) async fn create_room_route(
	// Silently skip encryption events if they are not allowed
	if pdu_builder.event_type == TimelineEventType::RoomEncryption
-		&& !services.globals.allow_encryption()
+		&& !services.config.allow_encryption
	{
		continue;
	}


@@ -40,5 +40,5 @@ pub(crate) async fn get_room_event_route(
	event.add_age().ok();
-	Ok(get_room_event::v3::Response { event: event.to_room_event() })
+	Ok(get_room_event::v3::Response { event: event.into_room_event() })
}


@@ -55,7 +55,7 @@ pub(crate) async fn room_initial_sync_route(
	chunk: events
		.into_iter()
		.map(at!(1))
-		.map(|pdu| pdu.to_room_event())
+		.map(PduEvent::into_room_event)
		.collect(),
};


@@ -2,9 +2,14 @@ mod aliases;
mod create;
mod event;
mod initial_sync;
+mod summary;
mod upgrade;
pub(crate) use self::{
-	aliases::get_room_aliases_route, create::create_room_route, event::get_room_event_route,
-	initial_sync::room_initial_sync_route, upgrade::upgrade_room_route,
+	aliases::get_room_aliases_route,
+	create::create_room_route,
+	event::get_room_event_route,
+	initial_sync::room_initial_sync_route,
+	summary::{get_room_summary, get_room_summary_legacy},
+	upgrade::upgrade_room_route,
};


@@ -0,0 +1,330 @@
use axum::extract::State;
use axum_client_ip::InsecureClientIp;
use conduwuit::{
Err, Result, debug_warn, trace,
utils::{IterStream, future::TryExtExt},
};
use futures::{
FutureExt, StreamExt,
future::{OptionFuture, join3},
stream::FuturesUnordered,
};
use ruma::{
OwnedServerName, RoomId, UserId,
api::{
client::room::get_summary,
federation::space::{SpaceHierarchyParentSummary, get_hierarchy},
},
events::room::member::MembershipState,
space::SpaceRoomJoinRule::{self, *},
};
use service::Services;
use crate::{Ruma, RumaResponse};
/// # `GET /_matrix/client/unstable/im.nheko.summary/rooms/{roomIdOrAlias}/summary`
///
/// Returns a short description of the state of a room.
///
/// This is the "wrong" endpoint that some implementations/clients may use
/// according to the MSC. Request and response bodies are the same as
/// `get_room_summary`.
///
/// An implementation of [MSC3266](https://github.com/matrix-org/matrix-spec-proposals/pull/3266)
pub(crate) async fn get_room_summary_legacy(
State(services): State<crate::State>,
InsecureClientIp(client): InsecureClientIp,
body: Ruma<get_summary::msc3266::Request>,
) -> Result<RumaResponse<get_summary::msc3266::Response>> {
get_room_summary(State(services), InsecureClientIp(client), body)
.boxed()
.await
.map(RumaResponse)
}
/// # `GET /_matrix/client/unstable/im.nheko.summary/summary/{roomIdOrAlias}`
///
/// Returns a short description of the state of a room.
///
/// An implementation of [MSC3266](https://github.com/matrix-org/matrix-spec-proposals/pull/3266)
#[tracing::instrument(skip_all, fields(%client), name = "room_summary")]
pub(crate) async fn get_room_summary(
State(services): State<crate::State>,
InsecureClientIp(client): InsecureClientIp,
body: Ruma<get_summary::msc3266::Request>,
) -> Result<get_summary::msc3266::Response> {
let (room_id, servers) = services
.rooms
.alias
.resolve_with_servers(&body.room_id_or_alias, Some(body.via.clone()))
.await?;
if services.rooms.metadata.is_banned(&room_id).await {
return Err!(Request(Forbidden("This room is banned on this homeserver.")));
}
room_summary_response(&services, &room_id, &servers, body.sender_user.as_deref())
.boxed()
.await
}
async fn room_summary_response(
services: &Services,
room_id: &RoomId,
servers: &[OwnedServerName],
sender_user: Option<&UserId>,
) -> Result<get_summary::msc3266::Response> {
if services
.rooms
.state_cache
.server_in_room(services.globals.server_name(), room_id)
.await
{
return local_room_summary_response(services, room_id, sender_user)
.boxed()
.await;
}
let room =
remote_room_summary_hierarchy_response(services, room_id, servers, sender_user).await?;
Ok(get_summary::msc3266::Response {
room_id: room_id.to_owned(),
canonical_alias: room.canonical_alias,
avatar_url: room.avatar_url,
guest_can_join: room.guest_can_join,
name: room.name,
num_joined_members: room.num_joined_members,
topic: room.topic,
world_readable: room.world_readable,
join_rule: room.join_rule,
room_type: room.room_type,
room_version: room.room_version,
encryption: room.encryption,
allowed_room_ids: room.allowed_room_ids,
membership: sender_user.is_some().then_some(MembershipState::Leave),
})
}
async fn local_room_summary_response(
services: &Services,
room_id: &RoomId,
sender_user: Option<&UserId>,
) -> Result<get_summary::msc3266::Response> {
trace!(?sender_user, "Sending local room summary response for {room_id:?}");
let join_rule = services.rooms.state_accessor.get_join_rules(room_id);
let world_readable = services.rooms.state_accessor.is_world_readable(room_id);
let guest_can_join = services.rooms.state_accessor.guest_can_join(room_id);
let (join_rule, world_readable, guest_can_join) =
join3(join_rule, world_readable, guest_can_join).await;
trace!("{join_rule:?}, {world_readable:?}, {guest_can_join:?}");
user_can_see_summary(
services,
room_id,
&join_rule.clone().into(),
guest_can_join,
world_readable,
join_rule.allowed_rooms(),
sender_user,
)
.await?;
let canonical_alias = services
.rooms
.state_accessor
.get_canonical_alias(room_id)
.ok();
let name = services.rooms.state_accessor.get_name(room_id).ok();
let topic = services.rooms.state_accessor.get_room_topic(room_id).ok();
let room_type = services.rooms.state_accessor.get_room_type(room_id).ok();
let avatar_url = services
.rooms
.state_accessor
.get_avatar(room_id)
.map(|res| res.into_option().unwrap_or_default().url);
let room_version = services.rooms.state.get_room_version(room_id).ok();
let encryption = services
.rooms
.state_accessor
.get_room_encryption(room_id)
.ok();
let num_joined_members = services
.rooms
.state_cache
.room_joined_count(room_id)
.unwrap_or(0);
let membership: OptionFuture<_> = sender_user
.map(|sender_user| {
services
.rooms
.state_accessor
.get_member(room_id, sender_user)
.map_ok_or(MembershipState::Leave, |content| content.membership)
})
.into();
let (
canonical_alias,
name,
num_joined_members,
topic,
avatar_url,
room_type,
room_version,
encryption,
membership,
) = futures::join!(
canonical_alias,
name,
num_joined_members,
topic,
avatar_url,
room_type,
room_version,
encryption,
membership,
);
Ok(get_summary::msc3266::Response {
room_id: room_id.to_owned(),
canonical_alias,
avatar_url,
guest_can_join,
name,
num_joined_members: num_joined_members.try_into().unwrap_or_default(),
topic,
world_readable,
room_type,
room_version,
encryption,
membership,
allowed_room_ids: join_rule.allowed_rooms().map(Into::into).collect(),
join_rule: join_rule.into(),
})
}
/// used by MSC3266 to fetch a room's info if we do not know about it
async fn remote_room_summary_hierarchy_response(
services: &Services,
room_id: &RoomId,
servers: &[OwnedServerName],
sender_user: Option<&UserId>,
) -> Result<SpaceHierarchyParentSummary> {
trace!(?sender_user, ?servers, "Sending remote room summary response for {room_id:?}");
if !services.config.allow_federation {
return Err!(Request(Forbidden("Federation is disabled.")));
}
if services.rooms.metadata.is_disabled(room_id).await {
return Err!(Request(Forbidden(
"Federaton of room {room_id} is currently disabled on this server."
)));
}
let request = get_hierarchy::v1::Request::new(room_id.to_owned());
let mut requests: FuturesUnordered<_> = servers
.iter()
.map(|server| {
services
.sending
.send_federation_request(server, request.clone())
})
.collect();
while let Some(Ok(response)) = requests.next().await {
trace!("{response:?}");
let room = response.room.clone();
if room.room_id != room_id {
debug_warn!(
"Room ID {} returned does not belong to the requested room ID {}",
room.room_id,
room_id
);
continue;
}
return user_can_see_summary(
services,
room_id,
&room.join_rule,
room.guest_can_join,
room.world_readable,
room.allowed_room_ids.iter().map(AsRef::as_ref),
sender_user,
)
.await
.map(|()| room);
}
Err!(Request(NotFound(
"Room is unknown to this server and was unable to fetch over federation with the \
provided servers available"
)))
}
async fn user_can_see_summary<'a, I>(
services: &Services,
room_id: &RoomId,
join_rule: &SpaceRoomJoinRule,
guest_can_join: bool,
world_readable: bool,
allowed_room_ids: I,
sender_user: Option<&UserId>,
) -> Result
where
I: Iterator<Item = &'a RoomId> + Send,
{
let is_public_room = matches!(join_rule, Public | Knock | KnockRestricted);
match sender_user {
| Some(sender_user) => {
let user_can_see_state_events = services
.rooms
.state_accessor
.user_can_see_state_events(sender_user, room_id);
let is_guest = services.users.is_deactivated(sender_user).unwrap_or(false);
let user_in_allowed_restricted_room = allowed_room_ids
.stream()
.any(|room| services.rooms.state_cache.is_joined(sender_user, room));
let (user_can_see_state_events, is_guest, user_in_allowed_restricted_room) =
join3(user_can_see_state_events, is_guest, user_in_allowed_restricted_room)
.boxed()
.await;
if user_can_see_state_events
|| (is_guest && guest_can_join)
|| is_public_room
|| user_in_allowed_restricted_room
{
return Ok(());
}
Err!(Request(Forbidden(
"Room is not world readable, not publicly accessible/joinable, restricted room \
conditions not met, and guest access is forbidden. Not allowed to see details \
of this room."
)))
},
| None => {
if is_public_room || world_readable {
return Ok(());
}
Err!(Request(Forbidden(
"Room is not world readable or publicly accessible/joinable, authentication is \
required"
)))
},
}
}
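`local_room_summary_response` above illustrates a pattern used throughout this new file: build the independent state-accessor futures first, then await them together (`join3`, `futures::join!`) so the lookups overlap instead of running one after another. A minimal sketch of the same shape with stand-in futures (the names here are illustrative, not conduwuit APIs):

use futures::join;

async fn summary_fields() -> (Option<String>, u64) {
	// Stand-ins for lookups like get_name() and room_joined_count();
	// each is an independent future, not yet awaited.
	let name = async { Some(String::from("Room")) };
	let joined = async { 42_u64 };

	// Both futures are polled concurrently; the tuple preserves order.
	join!(name, joined)
}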


@@ -1,7 +1,10 @@
use std::cmp::max;
use axum::extract::State;
-use conduwuit::{Error, Result, StateKey, err, info, pdu::PduBuilder};
+use conduwuit::{
+	Error, Result, err, info,
+	matrix::{StateKey, pdu::PduBuilder},
+};
use futures::StreamExt;
use ruma::{
	CanonicalJsonObject, RoomId, RoomVersionId,
@@ -103,7 +106,7 @@ pub(crate) async fn upgrade_room_route(
	// Use the m.room.tombstone event as the predecessor
	let predecessor = Some(ruma::events::room::create::PreviousRoom::new(
		body.room_id.clone(),
-		(*tombstone_event_id).to_owned(),
+		Some(tombstone_event_id),
	));
	// Send a m.room.create event containing a predecessor field and the applicable


@@ -2,10 +2,12 @@ use std::collections::BTreeMap;
use axum::extract::State;
use conduwuit::{
-	Err, PduEvent, Result, at, is_true,
+	Err, Result, at, is_true,
+	matrix::pdu::PduEvent,
	result::FlatOk,
	utils::{IterStream, stream::ReadyExt},
};
+use conduwuit_service::{Services, rooms::search::RoomQuery};
use futures::{FutureExt, StreamExt, TryFutureExt, TryStreamExt, future::OptionFuture};
use ruma::{
	OwnedRoomId, RoomId, UInt, UserId,
@@ -17,7 +19,6 @@ use ruma::{
	serde::Raw,
};
use search_events::v3::{Request, Response};
-use service::{Services, rooms::search::RoomQuery};
use crate::Ruma;
@@ -143,7 +144,7 @@ async fn category_room_events(
	.map(at!(2))
	.flatten()
	.stream()
-	.map(|pdu| pdu.to_room_event())
+	.map(PduEvent::into_room_event)
	.map(|result| SearchResult {
		rank: None,
		result: Some(result),


@@ -1,11 +1,11 @@
use std::collections::BTreeMap;
use axum::extract::State;
-use conduwuit::{Err, err};
+use conduwuit::{Err, Result, err, matrix::pdu::PduBuilder, utils};
use ruma::{api::client::message::send_message_event, events::MessageLikeEventType};
use serde_json::from_str;
-use crate::{Result, Ruma, service::pdu::PduBuilder, utils};
+use crate::Ruma;
/// # `PUT /_matrix/client/v3/rooms/{roomId}/send/{eventType}/{txnId}`
///
@@ -25,8 +25,7 @@ pub(crate) async fn send_message_event_route(
	let appservice_info = body.appservice_info.as_ref();
	// Forbid m.room.encrypted if encryption is disabled
-	if MessageLikeEventType::RoomEncrypted == body.event_type
-		&& !services.globals.allow_encryption()
+	if MessageLikeEventType::RoomEncrypted == body.event_type && !services.config.allow_encryption
	{
		return Err!(Request(Forbidden("Encryption has been disabled")));
	}


@@ -2,7 +2,11 @@ use std::time::Duration;
use axum::extract::State;
use axum_client_ip::InsecureClientIp;
-use conduwuit::{Err, debug, err, info, utils::ReadyExt};
+use conduwuit::{
+	Err, Error, Result, debug, err, info, utils,
+	utils::{ReadyExt, hash},
+};
+use conduwuit_service::uiaa::SESSION_ID_LENGTH;
use futures::StreamExt;
use ruma::{
	UserId,
@@ -22,10 +26,9 @@ use ruma::{
		uiaa,
	},
};
-use service::uiaa::SESSION_ID_LENGTH;
use super::{DEVICE_ID_LENGTH, TOKEN_LENGTH};
-use crate::{Error, Result, Ruma, utils, utils::hash};
+use crate::Ruma;
/// # `GET /_matrix/client/v3/login`
///


@@ -8,16 +8,16 @@ use conduwuit::{
	Err, Result,
	utils::{future::TryExtExt, stream::IterStream},
};
-use futures::{StreamExt, TryFutureExt, future::OptionFuture};
-use ruma::{
-	OwnedRoomId, OwnedServerName, RoomId, UInt, UserId, api::client::space::get_hierarchy,
-};
-use service::{
+use conduwuit_service::{
	Services,
	rooms::spaces::{
		PaginationToken, SummaryAccessibility, get_parent_children_via, summary_to_chunk,
	},
};
+use futures::{StreamExt, TryFutureExt, future::OptionFuture};
+use ruma::{
+	OwnedRoomId, OwnedServerName, RoomId, UInt, UserId, api::client::space::get_hierarchy,
+};
use crate::Ruma;
@@ -155,11 +155,7 @@ where
		break;
	}
-	if children.is_empty() {
-		break;
-	}
-	if parents.len() >= max_depth {
+	if parents.len() > max_depth {
		continue;
	}


@@ -1,5 +1,10 @@
use axum::extract::State;
-use conduwuit::{Err, PduEvent, Result, err, pdu::PduBuilder, utils::BoolExt};
+use conduwuit::{
+	Err, Result, err,
+	matrix::pdu::{PduBuilder, PduEvent},
+	utils::BoolExt,
+};
+use conduwuit_service::Services;
use futures::TryStreamExt;
use ruma::{
	OwnedEventId, RoomId, UserId,
@@ -16,7 +21,6 @@ use ruma::{
	},
	serde::Raw,
};
-use service::Services;
use crate::{Ruma, RumaResponse};
@@ -207,7 +211,7 @@ async fn allowed_to_send_state_event(
	// irreversible mistakes
	match json.deserialize_as::<RoomServerAclEventContent>() {
		| Ok(acl_content) => {
-			if acl_content.allow.is_empty() {
+			if acl_content.allow_is_empty() {
				return Err!(Request(BadJson(debug_warn!(
					?room_id,
					"Sending an ACL event with an empty allow key will permanently \
@@ -216,9 +220,7 @@ async fn allowed_to_send_state_event(
				))));
			}
-			if acl_content.deny.contains(&String::from("*"))
-				&& acl_content.allow.contains(&String::from("*"))
-			{
+			if acl_content.deny_contains("*") && acl_content.allow_contains("*") {
				return Err!(Request(BadJson(debug_warn!(
					?room_id,
					"Sending an ACL event with a deny and allow key value of \"*\" will \
@@ -227,8 +229,9 @@ async fn allowed_to_send_state_event(
				))));
			}
-			if acl_content.deny.contains(&String::from("*"))
+			if acl_content.deny_contains("*")
				&& !acl_content.is_allowed(services.globals.server_name())
+				&& !acl_content.allow_contains(services.globals.server_name().as_str())
			{
				return Err!(Request(BadJson(debug_warn!(
					?room_id,
@@ -238,8 +241,9 @@ async fn allowed_to_send_state_event(
				))));
			}
-			if !acl_content.allow.contains(&String::from("*"))
+			if !acl_content.allow_contains("*")
				&& !acl_content.is_allowed(services.globals.server_name())
+				&& !acl_content.allow_contains(services.globals.server_name().as_str())
			{
				return Err!(Request(BadJson(debug_warn!(
					?room_id,
@@ -254,7 +258,7 @@ async fn allowed_to_send_state_event(
					"Room server ACL event is invalid: {e}"
				))));
			},
-		};
+		}
	},
	| StateEventType::RoomEncryption =>
	// Forbid m.room.encryption if encryption is disabled


@@ -3,12 +3,14 @@ mod v4;
mod v5;
use conduwuit::{
-	PduCount,
+	Error, PduCount, Result,
+	matrix::pdu::PduEvent,
	utils::{
		IterStream,
		stream::{BroadbandExt, ReadyExt, TryIgnore},
	},
};
+use conduwuit_service::Services;
use futures::{StreamExt, pin_mut};
use ruma::{
	RoomId, UserId,
@@ -21,7 +23,6 @@ use ruma::{
pub(crate) use self::{
	v3::sync_events_route, v4::sync_events_v4_route, v5::sync_events_v5_route,
};
-use crate::{Error, PduEvent, Result, service::Services};
pub(crate) const DEFAULT_BUMP_TYPES: &[TimelineEventType; 6] =
	&[CallInvite, PollStart, Beacon, RoomEncrypted, RoomMessage, Sticker];


@@ -6,15 +6,20 @@ use std::{
use axum::extract::State;
use conduwuit::{
-	PduCount, PduEvent, Result, at, err, error, extract_variant, is_equal_to, pair_of,
-	pdu::{Event, EventHash},
-	ref_at,
+	Result, at, err, error, extract_variant, is_equal_to,
+	matrix::{
+		Event,
+		pdu::{EventHash, PduCount, PduEvent},
+	},
+	pair_of, ref_at,
	result::FlatOk,
	utils::{
		self, BoolExt, IterStream, ReadyExt, TryFutureExtExt,
+		future::OptionStream,
		math::ruma_from_u64,
		stream::{BroadbandExt, Tools, TryExpect, WidebandExt},
	},
+	warn,
};
use conduwuit_service::{
	Services,
@@ -118,7 +123,7 @@ pub(crate) async fn sync_events_route(
	let (sender_user, sender_device) = body.sender();
	// Presence update
-	if services.globals.allow_local_presence() {
+	if services.config.allow_local_presence {
		services
			.presence
			.ping_presence(sender_user, &body.body.set_presence)
@@ -219,6 +224,7 @@ pub(crate) async fn build_sync_events(
				sender_user,
				next_batch,
				full_state,
+				filter.room.include_leave,
				&filter,
			)
			.map_ok(move |left_room| (room_id, left_room))
@@ -278,8 +284,8 @@ pub(crate) async fn build_sync_events(
	});
	let presence_updates: OptionFuture<_> = services
-		.globals
-		.allow_local_presence()
+		.config
+		.allow_local_presence
		.then(|| process_presence_updates(services, since, sender_user))
		.into();
@@ -412,6 +418,7 @@ async fn handle_left_room(
	sender_user: &UserId,
	next_batch: u64,
	full_state: bool,
+	include_leave: bool,
	filter: &FilterDefinition,
) -> Result<Option<LeftRoom>> {
	let left_count = services
@@ -426,9 +433,12 @@ async fn handle_left_room(
		return Ok(None);
	}
-	if !services.rooms.metadata.exists(room_id).await {
+	if !services.rooms.metadata.exists(room_id).await
+		|| services.rooms.metadata.is_disabled(room_id).await
+		|| services.rooms.metadata.is_banned(room_id).await
+	{
		// This is just a rejected invite, not a room we know
-		// Insert a leave event anyways
+		// Insert a leave event anyways for the client
		let event = PduEvent {
			event_id: EventId::new(services.globals.server_name()),
			sender: sender_user.to_owned(),
@@ -459,7 +469,7 @@ async fn handle_left_room(
				events: Vec::new(),
			},
			state: RoomState {
-				events: vec![event.to_sync_state_event()],
+				events: vec![event.into_sync_state_event()],
			},
		}));
	}
@@ -487,7 +497,7 @@ async fn handle_left_room(
		.room_state_get_id(room_id, &StateEventType::RoomMember, sender_user.as_str())
		.await
	else {
-		error!("Left room but no left state event");
+		warn!("Left {room_id} but no left state event");
		return Ok(None);
	};
@@ -497,7 +507,7 @@ async fn handle_left_room(
		.pdu_shortstatehash(&left_event_id)
		.await
	else {
-		error!(event_id = %left_event_id, "Leave event has no state");
+		warn!(event_id = %left_event_id, "Leave event has no state in {room_id}");
		return Ok(None);
	};
@@ -540,7 +550,11 @@ async fn handle_left_room(
			continue;
		};
-		left_state_events.push(pdu.to_sync_state_event());
+		if !include_leave && pdu.sender == sender_user {
+			continue;
+		}
+		left_state_events.push(pdu.into_sync_state_event());
	}
}
@@ -859,8 +873,8 @@ async fn load_joined_room(
	},
	state: RoomState {
		events: state_events
-			.iter()
-			.map(PduEvent::to_sync_state_event)
+			.into_iter()
+			.map(PduEvent::into_sync_state_event)
			.collect(),
	},
	ephemeral: Ephemeral { events: edus },
@@ -1023,7 +1037,7 @@ async fn calculate_state_incremental<'a>(
		})
		.into();
-	let state_diff: OptionFuture<_> = (!full_state && state_changed)
+	let state_diff_ids: OptionFuture<_> = (!full_state && state_changed)
		.then(|| {
			StreamExt::into_future(
				services
@@ -1048,45 +1062,9 @@ async fn calculate_state_incremental<'a>(
		})
		.into();
-	let lazy_state_ids = lazy_state_ids
-		.map(|opt| {
-			opt.map(|(curr, next)| {
-				let opt = curr;
-				let iter = Option::into_iter(opt);
-				IterStream::stream(iter).chain(next)
-			})
-		})
-		.map(Option::into_iter)
-		.map(IterStream::stream)
-		.flatten_stream()
-		.flatten();
-	let state_diff_ids = state_diff
-		.map(|opt| {
-			opt.map(|(curr, next)| {
-				let opt = curr;
-				let iter = Option::into_iter(opt);
-				IterStream::stream(iter).chain(next)
-			})
-		})
-		.map(Option::into_iter)
-		.map(IterStream::stream)
-		.flatten_stream()
-		.flatten();
	let state_events = current_state_ids
-		.map(|opt| {
-			opt.map(|(curr, next)| {
-				let opt = curr;
-				let iter = Option::into_iter(opt);
-				IterStream::stream(iter).chain(next)
-			})
-		})
-		.map(Option::into_iter)
-		.map(IterStream::stream)
-		.flatten_stream()
-		.flatten()
-		.chain(state_diff_ids)
+		.stream()
+		.chain(state_diff_ids.stream())
		.broad_filter_map(|(shortstatekey, shorteventid)| async move {
			if witness.is_none() || encrypted_room {
				return Some(shorteventid);
@@ -1094,7 +1072,7 @@ async fn calculate_state_incremental<'a>(
			lazy_filter(services, sender_user, shortstatekey, shorteventid).await
		})
-		.chain(lazy_state_ids)
+		.chain(lazy_state_ids.stream())
		.broad_filter_map(|shorteventid| {
			services
				.rooms
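The deleted `.map(Option::into_iter).map(IterStream::stream).flatten_stream().flatten()` chains are the "awkward OptionFuture into Stream" pattern that one commit in this range encapsulates; the new `utils::future::OptionStream` import supplies the `.stream()` calls used above. A rough standalone equivalent of such an adapter, written with plain `futures` combinators (a sketch, not conduwuit's implementation):

use futures::{FutureExt, Stream, StreamExt, future::OptionFuture};

// Turn an optional future-of-stream into a stream that is simply empty
// when the OptionFuture resolves to None.
fn option_stream<F, S>(opt: OptionFuture<F>) -> impl Stream<Item = S::Item>
where
	F: std::future::Future<Output = S>,
	S: Stream,
{
	opt.into_stream()                             // yields one Option<S>
		.filter_map(|maybe| async move { maybe }) // drop the None case
		.flatten()                                // yield the inner items
}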


@@ -6,7 +6,7 @@ use std::{
use axum::extract::State;
use conduwuit::{
-	Error, PduCount, Result, debug, error, extract_variant,
+	Error, PduCount, PduEvent, Result, debug, error, extract_variant,
	utils::{
		BoolExt, IterStream, ReadyExt, TryFutureExtExt,
		math::{ruma_from_usize, usize_from_ruma, usize_from_u64_truncated},
@@ -438,7 +438,10 @@ pub(crate) async fn sync_events_v4_route(
	let mut known_subscription_rooms = BTreeSet::new();
	for (room_id, room) in &body.room_subscriptions {
-		if !services.rooms.metadata.exists(room_id).await {
+		if !services.rooms.metadata.exists(room_id).await
+			|| services.rooms.metadata.is_disabled(room_id).await
+			|| services.rooms.metadata.is_banned(room_id).await
+		{
			continue;
		}
		let todo_room =
@@ -634,7 +637,7 @@ pub(crate) async fn sync_events_v4_route(
		.state_accessor
		.room_state_get(room_id, &state.0, &state.1)
		.await
-		.map(|s| s.to_sync_state_event())
+		.map(PduEvent::into_sync_state_event)
		.ok()
	})
	.collect()


@@ -6,13 +6,19 @@ use std::{
use axum::extract::State;
use conduwuit::{
-	Error, Result, TypeStateKey, debug, error, extract_variant, trace,
+	Error, Result, debug, error, extract_variant,
+	matrix::{
+		TypeStateKey,
+		pdu::{PduCount, PduEvent},
+	},
+	trace,
	utils::{
		BoolExt, IterStream, ReadyExt, TryFutureExtExt,
		math::{ruma_from_usize, usize_from_ruma},
	},
	warn,
};
+use conduwuit_service::rooms::read_receipt::pack_receipts;
use futures::{FutureExt, StreamExt, TryFutureExt};
use ruma::{
	DeviceId, OwnedEventId, OwnedRoomId, RoomId, UInt, UserId,
@@ -27,7 +33,6 @@ use ruma::{
	serde::Raw,
	uint,
};
-use service::{PduCount, rooms::read_receipt::pack_receipts};
use super::{filter_rooms, share_encrypted_room};
use crate::{
@@ -214,7 +219,10 @@ async fn fetch_subscriptions(
) {
	let mut known_subscription_rooms = BTreeSet::new();
	for (room_id, room) in &body.room_subscriptions {
-		if !services.rooms.metadata.exists(room_id).await {
+		if !services.rooms.metadata.exists(room_id).await
+			|| services.rooms.metadata.is_disabled(room_id).await
+			|| services.rooms.metadata.is_banned(room_id).await
+		{
			continue;
		}
		let todo_room =
@@ -507,7 +515,7 @@ async fn process_rooms(
		.state_accessor
		.room_state_get(room_id, &state.0, &state.1)
		.await
-		.map(|s| s.to_sync_state_event())
+		.map(PduEvent::into_sync_state_event)
		.ok()
	})
	.collect()


@@ -1,6 +1,7 @@
use std::collections::BTreeMap;
use axum::extract::State;
+use conduwuit::Result;
use ruma::{
	api::client::tag::{create_tag, delete_tag, get_tags},
	events::{
@@ -9,7 +10,7 @@ use ruma::{
	},
};
-use crate::{Result, Ruma};
+use crate::Ruma;
/// # `PUT /_matrix/client/r0/user/{userId}/rooms/{roomId}/tags/{tag}`
///


@@ -1,8 +1,9 @@
use std::collections::BTreeMap;
+use conduwuit::Result;
use ruma::api::client::thirdparty::get_protocols;
-use crate::{Result, Ruma, RumaResponse};
+use crate::{Ruma, RumaResponse};
/// # `GET /_matrix/client/r0/thirdparty/protocols`
///


@@ -1,9 +1,12 @@
use axum::extract::State;
-use conduwuit::{PduCount, PduEvent, at};
+use conduwuit::{
+	Result, at,
+	matrix::pdu::{PduCount, PduEvent},
+};
use futures::StreamExt;
use ruma::{api::client::threads::get_threads, uint};
-use crate::{Result, Ruma};
+use crate::Ruma;
/// # `GET /_matrix/client/r0/rooms/{roomId}/threads`
pub(crate) async fn get_threads_route(
@@ -53,7 +56,7 @@ pub(crate) async fn get_threads_route(
	chunk: threads
		.into_iter()
		.map(at!(1))
-		.map(|pdu| pdu.to_room_event())
+		.map(PduEvent::into_room_event)
		.collect(),
	})
}


@@ -2,6 +2,7 @@ use std::collections::BTreeMap;
use axum::extract::State;
use conduwuit::{Error, Result};
+use conduwuit_service::sending::EduBuf;
use futures::StreamExt;
use ruma::{
	api::{
@@ -10,7 +11,6 @@ use ruma::{
	},
	to_device::DeviceIdOrAllDevices,
};
-use service::sending::EduBuf;
use crate::Ruma;


@@ -1,8 +1,8 @@
use axum::extract::State;
-use conduwuit::{Err, utils::math::Tried};
+use conduwuit::{Err, Result, utils, utils::math::Tried};
use ruma::api::client::typing::create_typing_event;
-use crate::{Result, Ruma, utils};
+use crate::Ruma;
/// # `PUT /_matrix/client/r0/rooms/{roomId}/typing/{userId}`
///
@@ -64,7 +64,7 @@ pub(crate) async fn create_typing_event_route(
	}
	// ping presence
-	if services.globals.allow_local_presence() {
+	if services.config.allow_local_presence {
		services
			.presence
			.ping_presence(&body.user_id, &ruma::presence::PresenceState::Online)


@@ -2,7 +2,7 @@ use std::collections::BTreeMap;
use axum::extract::State;
use axum_client_ip::InsecureClientIp;
-use conduwuit::Err;
+use conduwuit::{Err, Error, Result};
use futures::StreamExt;
use ruma::{
	OwnedRoomId,
@@ -14,16 +14,14 @@ use ruma::{
			delete_profile_key, delete_timezone_key, get_profile_key, get_timezone_key,
			set_profile_key, set_timezone_key,
		},
-			room::get_summary,
		},
		federation,
	},
-	events::room::member::MembershipState,
	presence::PresenceState,
};
use super::{update_avatar_url, update_displayname};
-use crate::{Error, Result, Ruma, RumaResponse};
+use crate::Ruma;
/// # `GET /_matrix/client/unstable/uk.half-shot.msc2666/user/mutual_rooms`
///
@@ -38,13 +36,10 @@ pub(crate) async fn get_mutual_rooms_route(
	InsecureClientIp(client): InsecureClientIp,
	body: Ruma<mutual_rooms::unstable::Request>,
) -> Result<mutual_rooms::unstable::Response> {
-	let sender_user = body.sender_user.as_ref().expect("user is authenticated");
-	if sender_user == &body.user_id {
-		return Err(Error::BadRequest(
-			ErrorKind::Unknown,
-			"You cannot request rooms in common with yourself.",
-		));
+	let sender_user = body.sender_user();
+	if sender_user == body.user_id {
+		return Err!(Request(Unknown("You cannot request rooms in common with yourself.")));
	}
	if !services.users.exists(&body.user_id).await {
@@ -65,129 +60,6 @@ pub(crate) async fn get_mutual_rooms_route(
	})
}
/// # `GET /_matrix/client/unstable/im.nheko.summary/rooms/{roomIdOrAlias}/summary`
///
/// Returns a short description of the state of a room.
///
/// This is the "wrong" endpoint that some implementations/clients may use
/// according to the MSC. Request and response bodies are the same as
/// `get_room_summary`.
///
/// An implementation of [MSC3266](https://github.com/matrix-org/matrix-spec-proposals/pull/3266)
pub(crate) async fn get_room_summary_legacy(
State(services): State<crate::State>,
InsecureClientIp(client): InsecureClientIp,
body: Ruma<get_summary::msc3266::Request>,
) -> Result<RumaResponse<get_summary::msc3266::Response>> {
get_room_summary(State(services), InsecureClientIp(client), body)
.await
.map(RumaResponse)
}
/// # `GET /_matrix/client/unstable/im.nheko.summary/summary/{roomIdOrAlias}`
///
/// Returns a short description of the state of a room.
///
/// TODO: support fetching remote room info if we don't know the room
///
/// An implementation of [MSC3266](https://github.com/matrix-org/matrix-spec-proposals/pull/3266)
#[tracing::instrument(skip_all, fields(%client), name = "room_summary")]
pub(crate) async fn get_room_summary(
State(services): State<crate::State>,
InsecureClientIp(client): InsecureClientIp,
body: Ruma<get_summary::msc3266::Request>,
) -> Result<get_summary::msc3266::Response> {
let sender_user = body.sender_user.as_ref();
let room_id = services.rooms.alias.resolve(&body.room_id_or_alias).await?;
if !services.rooms.metadata.exists(&room_id).await {
return Err(Error::BadRequest(ErrorKind::NotFound, "Room is unknown to this server"));
}
if sender_user.is_none()
&& !services
.rooms
.state_accessor
.is_world_readable(&room_id)
.await
{
return Err(Error::BadRequest(
ErrorKind::forbidden(),
"Room is not world readable, authentication is required",
));
}
Ok(get_summary::msc3266::Response {
room_id: room_id.clone(),
canonical_alias: services
.rooms
.state_accessor
.get_canonical_alias(&room_id)
.await
.ok(),
avatar_url: services
.rooms
.state_accessor
.get_avatar(&room_id)
.await
.into_option()
.unwrap_or_default()
.url,
guest_can_join: services.rooms.state_accessor.guest_can_join(&room_id).await,
name: services.rooms.state_accessor.get_name(&room_id).await.ok(),
num_joined_members: services
.rooms
.state_cache
.room_joined_count(&room_id)
.await
.unwrap_or(0)
.try_into()?,
topic: services
.rooms
.state_accessor
.get_room_topic(&room_id)
.await
.ok(),
world_readable: services
.rooms
.state_accessor
.is_world_readable(&room_id)
.await,
join_rule: services
.rooms
.state_accessor
.get_join_rule(&room_id)
.await
.unwrap_or_default()
.0,
room_type: services
.rooms
.state_accessor
.get_room_type(&room_id)
.await
.ok(),
room_version: services.rooms.state.get_room_version(&room_id).await.ok(),
membership: if let Some(sender_user) = sender_user {
services
.rooms
.state_accessor
.get_member(&room_id, sender_user)
.await
.map_or_else(|_| MembershipState::Leave, |content| content.membership)
.into()
} else {
None
},
encryption: services
.rooms
.state_accessor
.get_room_encryption(&room_id)
.await
.ok(),
})
}
/// # `DELETE /_matrix/client/unstable/uk.tcpip.msc4133/profile/:user_id/us.cloke.msc4175.tz`
///
/// Deletes the `tz` (timezone) of a user, as per MSC4133 and MSC4175.
@@ -205,7 +77,7 @@ pub(crate) async fn delete_timezone_key_route(
	services.users.set_timezone(&body.user_id, None);
-	if services.globals.allow_local_presence() {
+	if services.config.allow_local_presence {
		// Presence update
		services
			.presence
@@ -233,7 +105,7 @@ pub(crate) async fn set_timezone_key_route(
	services.users.set_timezone(&body.user_id, body.tz.clone());
-	if services.globals.allow_local_presence() {
+	if services.config.allow_local_presence {
		// Presence update
		services
			.presence
@@ -326,7 +198,7 @@ pub(crate) async fn set_profile_key_route(
		);
	}
-	if services.globals.allow_local_presence() {
+	if services.config.allow_local_presence {
		// Presence update
		services
			.presence
@@ -385,7 +257,7 @@ pub(crate) async fn delete_profile_key_route(
			.set_profile_key(&body.user_id, &body.key_name, None);
	}
-	if services.globals.allow_local_presence() {
+	if services.config.allow_local_presence {
		// Presence update
		services
			.presence


@@ -1,10 +1,11 @@
use std::collections::BTreeMap;
use axum::{Json, extract::State, response::IntoResponse};
+use conduwuit::Result;
use futures::StreamExt;
use ruma::api::client::discovery::get_supported_versions;
-use crate::{Result, Ruma};
+use crate::Ruma;
/// # `GET /_matrix/client/versions`
///


@@ -1,15 +1,19 @@
use axum::extract::State;
-use conduwuit::utils::TryFutureExtExt;
-use futures::{StreamExt, pin_mut};
+use conduwuit::{
+	Result,
+	utils::{future::BoolExt, stream::BroadbandExt},
+};
+use futures::{FutureExt, StreamExt, pin_mut};
use ruma::{
-	api::client::user_directory::search_users,
-	events::{
-		StateEventType,
-		room::join_rules::{JoinRule, RoomJoinRulesEventContent},
-	},
+	api::client::user_directory::search_users::{self},
+	events::room::join_rules::JoinRule,
};
-use crate::{Result, Ruma};
+use crate::Ruma;
+// conduwuit can handle a lot more results than synapse
+const LIMIT_MAX: usize = 500;
+const LIMIT_DEFAULT: usize = 10;
/// # `POST /_matrix/client/r0/user_directory/search`
///
@@ -21,78 +25,63 @@ pub(crate) async fn search_users_route(
	State(services): State<crate::State>,
	body: Ruma<search_users::v3::Request>,
) -> Result<search_users::v3::Response> {
-	let sender_user = body.sender_user.as_ref().expect("user is authenticated");
-	let limit = usize::try_from(body.limit).map_or(10, usize::from).min(100); // default limit is 10
+	let sender_user = body.sender_user();
+	let limit = usize::try_from(body.limit)
+		.map_or(LIMIT_DEFAULT, usize::from)
+		.min(LIMIT_MAX);
-	let users = services.users.stream().filter_map(|user_id| async {
-		// Filter out buggy users (they should not exist, but you never know...)
-		let user = search_users::v3::User {
-			user_id: user_id.to_owned(),
-			display_name: services.users.displayname(user_id).await.ok(),
-			avatar_url: services.users.avatar_url(user_id).await.ok(),
-		};
+	let mut users = services
+		.users
+		.stream()
+		.map(ToOwned::to_owned)
+		.broad_filter_map(async |user_id| {
+			let user = search_users::v3::User {
+				user_id: user_id.clone(),
+				display_name: services.users.displayname(&user_id).await.ok(),
+				avatar_url: services.users.avatar_url(&user_id).await.ok(),
+			};
		let user_id_matches = user
			.user_id
-			.to_string()
+			.as_str()
			.to_lowercase()
			.contains(&body.search_term.to_lowercase());
-		let user_displayname_matches = user
-			.display_name
-			.as_ref()
-			.filter(|name| {
-				name.to_lowercase()
-					.contains(&body.search_term.to_lowercase())
-			})
-			.is_some();
+		let user_displayname_matches = user.display_name.as_ref().is_some_and(|name| {
+			name.to_lowercase()
+				.contains(&body.search_term.to_lowercase())
+		});
		if !user_id_matches && !user_displayname_matches {
			return None;
		}
-		// It's a matching user, but is the sender allowed to see them?
-		let mut user_visible = false;
-		let user_is_in_public_rooms = services
-			.rooms
-			.state_cache
-			.rooms_joined(&user.user_id)
-			.any(|room| {
-				services
-					.rooms
-					.state_accessor
-					.room_state_get_content::<RoomJoinRulesEventContent>(
-						room,
-						&StateEventType::RoomJoinRules,
-						"",
-					)
-					.map_ok_or(false, |content| content.join_rule == JoinRule::Public)
-			})
-			.await;
-		if user_is_in_public_rooms {
-			user_visible = true;
-		} else {
-			let user_is_in_shared_rooms = services
-				.rooms
-				.state_cache
-				.user_sees_user(sender_user, &user.user_id)
-				.await;
-			if user_is_in_shared_rooms {
-				user_visible = true;
-			}
-		}
-		user_visible.then_some(user)
-	});
-	pin_mut!(users);
+		let user_in_public_room = services
+			.rooms
+			.state_cache
+			.rooms_joined(&user_id)
+			.map(ToOwned::to_owned)
+			.any(|room| async move {
+				services
+					.rooms
+					.state_accessor
+					.get_join_rules(&room)
+					.map(|rule| matches!(rule, JoinRule::Public))
+					.await
+			});
+		let user_sees_user = services
+			.rooms
+			.state_cache
+			.user_sees_user(sender_user, &user_id);
+		pin_mut!(user_in_public_room, user_sees_user);
+		user_in_public_room.or(user_sees_user).await.then_some(user)
+	});
-	let limited = users.by_ref().next().await.is_some();
-	let results = users.take(limit).collect().await;
+	let results = users.by_ref().take(limit).collect().await;
+	let limited = users.next().await.is_some();
	Ok(search_users::v3::Response { results, limited })
}
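The rewritten route admits a user when either visibility check passes; `utils::future::BoolExt` (imported at the top of the hunk) is conduwuit's helper for combining two boolean futures, so the two checks run concurrently rather than sequentially. A rough equivalent of the semantics with plain `futures`, shown only as a sketch (the real helper presumably resolves as soon as either side can decide):

use std::future::Future;

use futures::join;

// Await both boolean futures concurrently and OR the results.
async fn either(a: impl Future<Output = bool>, b: impl Future<Output = bool>) -> bool {
	let (a, b) = join!(a, b);
	a || b
}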


@@ -2,12 +2,12 @@ use std::time::{Duration, SystemTime};
use axum::extract::State;
use base64::{Engine as _, engine::general_purpose};
-use conduwuit::{Err, utils};
+use conduwuit::{Err, Result, utils};
use hmac::{Hmac, Mac};
use ruma::{SecondsSinceUnixEpoch, UserId, api::client::voip::get_turn_server_info};
use sha1::Sha1;
-use crate::{Result, Ruma};
+use crate::Ruma;
const RANDOM_USER_ID_LENGTH: usize = 10;


@@ -1,4 +1,5 @@
use axum::{Json, extract::State, response::IntoResponse};
+use conduwuit::{Error, Result};
use ruma::api::client::{
	discovery::{
		discover_homeserver::{self, HomeserverInfo, SlidingSyncProxyInfo},
@@ -7,7 +8,7 @@ use ruma::api::client::{
	error::ErrorKind,
};
-use crate::{Error, Result, Ruma};
+use crate::Ruma;
/// # `GET /.well-known/matrix/client`
///


@@ -1,3 +1,4 @@
+#![type_length_limit = "16384"] //TODO: reduce me
#![allow(clippy::toplevel_ref_arg)]
pub mod client;
@@ -7,8 +8,6 @@ pub mod server;
extern crate conduwuit_core as conduwuit;
extern crate conduwuit_service as service;
-pub(crate) use conduwuit::{Error, Result, debug_info, pdu::PduEvent, utils};
pub(crate) use self::router::{Ruma, RumaResponse, State};
conduwuit::mod_ctor! {}


@@ -1,6 +1,7 @@
use std::{mem, ops::Deref};
-use axum::{async_trait, body::Body, extract::FromRequest};
+use async_trait::async_trait;
+use axum::{body::Body, extract::FromRequest};
use bytes::{BufMut, Bytes, BytesMut};
use conduwuit::{Error, Result, debug, debug_warn, err, trace, utils::string::EMPTY};
use ruma::{


@@ -317,10 +317,9 @@ fn auth_server_checks(services: &Services, x_matrix: &XMatrix) -> Result<()> {
	let origin = &x_matrix.origin;
	if services
-		.server
		.config
		.forbidden_remote_server_names
-		.contains(origin)
+		.is_match(origin.host())
	{
		return Err!(Request(Forbidden(debug_warn!(
			"Federation requests from {origin} denied."


@@ -6,11 +6,17 @@ use conduwuit::{
	utils::{IterStream, ReadyExt, stream::TryTools},
};
use futures::{FutureExt, StreamExt, TryStreamExt};
-use ruma::{MilliSecondsSinceUnixEpoch, api::federation::backfill::get_backfill, uint};
+use ruma::{MilliSecondsSinceUnixEpoch, api::federation::backfill::get_backfill};
use super::AccessCheck;
use crate::Ruma;
+/// arbitrary number but synapse's is 100 and we can handle lots of these
+/// anyways
+const LIMIT_MAX: usize = 150;
+/// no spec defined number but we can handle a lot of these
+const LIMIT_DEFAULT: usize = 50;
/// # `GET /_matrix/federation/v1/backfill/<room_id>`
///
/// Retrieves events from before the sender joined the room, if the room's
@@ -30,9 +36,9 @@ pub(crate) async fn get_backfill_route(
	let limit = body
		.limit
-		.min(uint!(100))
		.try_into()
-		.expect("UInt could not be converted to usize");
+		.unwrap_or(LIMIT_DEFAULT)
+		.min(LIMIT_MAX);
	let from = body
		.v
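The new clamp reads: convert the client-supplied limit, fall back to `LIMIT_DEFAULT` if the conversion fails, and cap at `LIMIT_MAX` (the same shape is used in `get_missing_events` below). A worked sketch of the logic in isolation:

fn clamp_limit(requested: u64) -> usize {
	const LIMIT_DEFAULT: usize = 50;
	const LIMIT_MAX: usize = 150;

	// Default on conversion failure, then cap; mirrors the route above.
	usize::try_from(requested)
		.unwrap_or(LIMIT_DEFAULT)
		.min(LIMIT_MAX)
}

// clamp_limit(70) == 70 (within range); clamp_limit(500) == 150 (capped).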


@@ -1,13 +1,15 @@
use axum::extract::State;
-use conduwuit::{Error, Result};
-use ruma::{
-	CanonicalJsonValue, EventId, RoomId,
-	api::{client::error::ErrorKind, federation::event::get_missing_events},
-};
+use conduwuit::{Result, debug, debug_error, utils::to_canonical_object};
+use ruma::api::federation::event::get_missing_events;
use super::AccessCheck;
use crate::Ruma;
+/// arbitrary number but synapse's is 20 and we can handle lots of these anyways
+const LIMIT_MAX: usize = 50;
+/// spec says default is 10
+const LIMIT_DEFAULT: usize = 10;
/// # `POST /_matrix/federation/v1/get_missing_events/{roomId}`
///
/// Retrieves events that the sender is missing.
@@ -24,7 +26,11 @@ pub(crate) async fn get_missing_events_route(
	.check()
	.await?;
-	let limit = body.limit.try_into()?;
+	let limit = body
+		.limit
+		.try_into()
+		.unwrap_or(LIMIT_DEFAULT)
+		.min(LIMIT_MAX);
	let mut queued_events = body.latest_events.clone();
	// the vec will never have more entries the limit
@@ -32,60 +38,52 @@ pub(crate) async fn get_missing_events_route(
	let mut i: usize = 0;
	while i < queued_events.len() && events.len() < limit {
-		if let Ok(pdu) = services
-			.rooms
-			.timeline
-			.get_pdu_json(&queued_events[i])
-			.await
-		{
-			let room_id_str = pdu
-				.get("room_id")
-				.and_then(|val| val.as_str())
-				.ok_or_else(|| Error::bad_database("Invalid event in database."))?;
-			let event_room_id = <&RoomId>::try_from(room_id_str)
-				.map_err(|_| Error::bad_database("Invalid room_id in event in database."))?;
-			if event_room_id != body.room_id {
-				return Err(Error::BadRequest(ErrorKind::InvalidParam, "Event from wrong room."));
-			}
-			if body.earliest_events.contains(&queued_events[i]) {
-				i = i.saturating_add(1);
-				continue;
-			}
-			if !services
-				.rooms
-				.state_accessor
-				.server_can_see_event(body.origin(), &body.room_id, &queued_events[i])
-				.await
-			{
-				i = i.saturating_add(1);
-				continue;
-			}
-			let prev_events = pdu
-				.get("prev_events")
-				.and_then(CanonicalJsonValue::as_array)
-				.unwrap_or_default();
-			queued_events.extend(
-				prev_events
-					.iter()
-					.map(<&EventId>::try_from)
-					.filter_map(Result::ok)
-					.map(ToOwned::to_owned),
-			);
-			events.push(
-				services
-					.sending
-					.convert_to_outgoing_federation_event(pdu)
-					.await,
-			);
-		}
+		let Ok(pdu) = services.rooms.timeline.get_pdu(&queued_events[i]).await else {
+			debug!(
+				?body.origin,
+				"Event {} does not exist locally, skipping", &queued_events[i]
+			);
+			i = i.saturating_add(1);
+			continue;
+		};
+		if body.earliest_events.contains(&queued_events[i]) {
+			i = i.saturating_add(1);
+			continue;
+		}
+		if !services
+			.rooms
+			.state_accessor
+			.server_can_see_event(body.origin(), &body.room_id, &queued_events[i])
+			.await
+		{
+			debug!(
+				?body.origin,
+				"Server cannot see {:?} in {:?}, skipping", pdu.event_id, pdu.room_id
+			);
+			i = i.saturating_add(1);
+			continue;
+		}
+		let Ok(event) = to_canonical_object(&pdu) else {
+			debug_error!(
+				?body.origin,
+				"Failed to convert PDU in database to canonical JSON: {pdu:?}"
+			);
+			i = i.saturating_add(1);
+			continue;
+		};
+		let prev_events = pdu.prev_events.iter().map(ToOwned::to_owned);
+		let event = services
+			.sending
+			.convert_to_outgoing_federation_event(event)
+			.await;
+		queued_events.extend(prev_events);
+		events.push(event);
		i = i.saturating_add(1);
	}
	Ok(get_missing_events::v1::Response { events })

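The rewritten loop flattens the old nested `if let` into early-`continue` guards via `let .. else`, drops the manual `room_id` revalidation, and logs each skip. A reduced sketch of that control-flow shape (the closures stand in for the timeline and visibility services; all names here are illustrative):

```rust
/// Breadth-first walk over prev_events with early-continue guards,
/// mirroring the shape of the rewritten route handler.
fn collect_missing(
    mut queued: Vec<String>,
    limit: usize,
    fetch: impl Fn(&str) -> Option<(String, Vec<String>)>, // timeline lookup stand-in
    visible: impl Fn(&str) -> bool,                        // visibility check stand-in
    earliest: &[String],
) -> Vec<String> {
    let mut events = Vec::new();
    let mut i: usize = 0;
    while i < queued.len() && events.len() < limit {
        let id = queued[i].clone();
        i = i.saturating_add(1);
        // Guard 1: unknown locally -> skip (was a nested `if let`).
        let Some((event, prev_events)) = fetch(&id) else { continue };
        // Guard 2: the caller already has this event -> skip.
        if earliest.contains(&id) { continue }
        // Guard 3: not visible to the requesting server -> skip.
        if !visible(&id) { continue }
        queued.extend(prev_events); // walk backwards through the event DAG
        events.push(event);
    }
    events
}
```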

@@ -3,9 +3,11 @@ use conduwuit::{
 	Err, Result,
 	utils::stream::{BroadbandExt, IterStream},
 };
+use conduwuit_service::rooms::spaces::{
+	Identifier, SummaryAccessibility, get_parent_children_via,
+};
 use futures::{FutureExt, StreamExt};
 use ruma::api::federation::space::get_hierarchy;
-use service::rooms::spaces::{Identifier, SummaryAccessibility, get_parent_children_via};

 use crate::Ruma;


@@ -1,14 +1,15 @@
 use axum::extract::State;
 use axum_client_ip::InsecureClientIp;
 use base64::{Engine as _, engine::general_purpose};
-use conduwuit::{Err, Error, PduEvent, Result, err, utils, utils::hash::sha256, warn};
+use conduwuit::{
+	Err, Error, PduEvent, Result, err, pdu::gen_event_id, utils, utils::hash::sha256, warn,
+};
 use ruma::{
 	CanonicalJsonValue, OwnedUserId, UserId,
 	api::{client::error::ErrorKind, federation::membership::create_invite},
 	events::room::member::{MembershipState, RoomMemberEventContent},
 	serde::JsonObject,
 };
-use service::pdu::gen_event_id;

 use crate::Ruma;
@@ -37,20 +38,18 @@ pub(crate) async fn create_invite_route(
 	if let Some(server) = body.room_id.server_name() {
 		if services
-			.server
 			.config
 			.forbidden_remote_server_names
-			.contains(&server.to_owned())
+			.is_match(server.host())
 		{
 			return Err!(Request(Forbidden("Server is banned on this homeserver.")));
 		}
 	}

 	if services
-		.server
 		.config
 		.forbidden_remote_server_names
-		.contains(body.origin())
+		.is_match(body.origin().host())
 	{
 		warn!(
 			"Received federated/remote invite from banned server {} for room ID {}. Rejecting.",
@@ -103,8 +102,7 @@ pub(crate) async fn create_invite_route(
 		return Err!(Request(Forbidden("This room is banned on this homeserver.")));
 	}

-	if services.globals.block_non_admin_invites() && !services.users.is_admin(&invited_user).await
-	{
+	if services.config.block_non_admin_invites && !services.users.is_admin(&invited_user).await {
 		return Err!(Request(Forbidden("This server does not allow room invites.")));
 	}


@@ -1,5 +1,8 @@
 use axum::extract::State;
-use conduwuit::{Err, debug_info, utils::IterStream, warn};
+use conduwuit::{
+	Err, Error, Result, debug_info, matrix::pdu::PduBuilder, utils::IterStream, warn,
+};
+use conduwuit_service::Services;
 use futures::StreamExt;
 use ruma::{
 	CanonicalJsonObject, OwnedUserId, RoomId, RoomVersionId, UserId,
@@ -14,10 +17,7 @@ use ruma::{
 };
 use serde_json::value::to_raw_value;

-use crate::{
-	Error, Result, Ruma,
-	service::{Services, pdu::PduBuilder},
-};
+use crate::Ruma;

 /// # `GET /_matrix/federation/v1/make_join/{roomId}/{userId}`
 ///
@@ -42,10 +42,9 @@ pub(crate) async fn create_join_event_template_route(
 	.await?;

 	if services
-		.server
 		.config
 		.forbidden_remote_server_names
-		.contains(body.origin())
+		.is_match(body.origin().host())
 	{
 		warn!(
 			"Server {} for remote user {} tried joining room ID {} which has a server name that \
@@ -59,10 +58,9 @@ pub(crate) async fn create_join_event_template_route(
 	if let Some(server) = body.room_id.server_name() {
 		if services
-			.server
 			.config
 			.forbidden_remote_server_names
-			.contains(&server.to_owned())
+			.is_match(server.host())
 		{
 			return Err!(Request(Forbidden(warn!(
 				"Room ID server name {server} is banned on this homeserver."


@@ -1,15 +1,14 @@
 use RoomVersionId::*;
 use axum::extract::State;
-use conduwuit::{Err, debug_warn};
+use conduwuit::{Err, Error, Result, debug_warn, matrix::pdu::PduBuilder, warn};
 use ruma::{
 	RoomVersionId,
 	api::{client::error::ErrorKind, federation::knock::create_knock_event_template},
 	events::room::member::{MembershipState, RoomMemberEventContent},
 };
 use serde_json::value::to_raw_value;
-use tracing::warn;

-use crate::{Error, Result, Ruma, service::pdu::PduBuilder};
+use crate::Ruma;

 /// # `GET /_matrix/federation/v1/make_knock/{roomId}/{userId}`
 ///
@@ -34,10 +33,9 @@ pub(crate) async fn create_knock_event_template_route(
 	.await?;

 	if services
-		.server
 		.config
 		.forbidden_remote_server_names
-		.contains(body.origin())
+		.is_match(body.origin().host())
 	{
 		warn!(
 			"Server {} for remote user {} tried knocking room ID {} which has a server name \
@@ -51,10 +49,9 @@ pub(crate) async fn create_knock_event_template_route(
 	if let Some(server) = body.room_id.server_name() {
 		if services
-			.server
 			.config
 			.forbidden_remote_server_names
-			.contains(&server.to_owned())
+			.is_match(server.host())
 		{
 			return Err!(Request(Forbidden("Server is banned on this homeserver.")));
 		}


@@ -1,5 +1,5 @@
 use axum::extract::State;
-use conduwuit::{Err, Result};
+use conduwuit::{Err, Result, matrix::pdu::PduBuilder};
 use ruma::{
 	api::federation::membership::prepare_leave_event,
 	events::room::member::{MembershipState, RoomMemberEventContent},
@@ -7,7 +7,7 @@ use ruma::{
 use serde_json::value::to_raw_value;

 use super::make_join::maybe_strip_event_id;
-use crate::{Ruma, service::pdu::PduBuilder};
+use crate::Ruma;

 /// # `GET /_matrix/federation/v1/make_leave/{roomId}/{eventId}`
 ///


@@ -1,7 +1,8 @@
 use axum::extract::State;
+use conduwuit::Result;
 use ruma::api::federation::openid::get_openid_userinfo;

-use crate::{Result, Ruma};
+use crate::Ruma;

 /// # `GET /_matrix/federation/v1/openid/userinfo`
 ///


@@ -1,5 +1,6 @@
 use axum::extract::State;
 use axum_client_ip::InsecureClientIp;
+use conduwuit::{Error, Result};
 use ruma::{
 	api::{
 		client::error::ErrorKind,
@@ -8,7 +9,7 @@ use ruma::{
 	directory::Filter,
 };

-use crate::{Error, Result, Ruma};
+use crate::Ruma;

 /// # `POST /_matrix/federation/v1/publicRooms`
 ///


@@ -9,11 +9,15 @@ use conduwuit::{
 	result::LogErr,
 	trace,
 	utils::{
-		IterStream, ReadyExt,
+		IterStream, ReadyExt, millis_since_unix_epoch,
 		stream::{BroadbandExt, TryBroadbandExt, automatic_width},
 	},
 	warn,
 };
+use conduwuit_service::{
+	Services,
+	sending::{EDU_LIMIT, PDU_LIMIT},
+};
 use futures::{FutureExt, Stream, StreamExt, TryFutureExt, TryStreamExt};
 use itertools::Itertools;
 use ruma::{
@@ -33,16 +37,8 @@ use ruma::{
 	serde::Raw,
 	to_device::DeviceIdOrAllDevices,
 };
-use service::{
-	Services,
-	sending::{EDU_LIMIT, PDU_LIMIT},
-};
-use utils::millis_since_unix_epoch;

-use crate::{
-	Ruma,
-	utils::{self},
-};
+use crate::Ruma;

 type ResolvedMap = BTreeMap<OwnedEventId, Result>;
 type Pdu = (OwnedRoomId, OwnedEventId, CanonicalJsonObject);


@@ -9,6 +9,7 @@ use conduwuit::{
 	utils::stream::{IterStream, TryBroadbandExt},
 	warn,
 };
+use conduwuit_service::Services;
 use futures::{FutureExt, StreamExt, TryStreamExt};
 use ruma::{
 	CanonicalJsonValue, OwnedEventId, OwnedRoomId, OwnedServerName, OwnedUserId, RoomId,
@@ -20,7 +21,6 @@ use ruma::{
 	},
 };
 use serde_json::value::{RawValue as RawJsonValue, to_raw_value};
-use service::Services;

 use crate::Ruma;
@@ -268,10 +268,9 @@ pub(crate) async fn create_join_event_v1_route(
 	body: Ruma<create_join_event::v1::Request>,
 ) -> Result<create_join_event::v1::Response> {
 	if services
-		.server
 		.config
 		.forbidden_remote_server_names
-		.contains(body.origin())
+		.is_match(body.origin().host())
 	{
 		warn!(
 			"Server {} tried joining room ID {} through us who has a server name that is \
@@ -284,10 +283,9 @@ pub(crate) async fn create_join_event_v1_route(
 	if let Some(server) = body.room_id.server_name() {
 		if services
-			.server
 			.config
 			.forbidden_remote_server_names
-			.contains(&server.to_owned())
+			.is_match(server.host())
 		{
 			warn!(
 				"Server {} tried joining room ID {} through us which has a server name that is \
@@ -316,20 +314,18 @@ pub(crate) async fn create_join_event_v2_route(
 	body: Ruma<create_join_event::v2::Request>,
 ) -> Result<create_join_event::v2::Response> {
 	if services
-		.server
 		.config
 		.forbidden_remote_server_names
-		.contains(body.origin())
+		.is_match(body.origin().host())
 	{
 		return Err!(Request(Forbidden("Server is banned on this homeserver.")));
 	}

 	if let Some(server) = body.room_id.server_name() {
 		if services
-			.server
 			.config
 			.forbidden_remote_server_names
-			.contains(&server.to_owned())
+			.is_match(server.host())
 		{
 			warn!(
 				"Server {} tried joining room ID {} through us which has a server name that is \


@@ -1,5 +1,9 @@
 use axum::extract::State;
-use conduwuit::{Err, PduEvent, Result, err, pdu::gen_event_id_canonical_json, warn};
+use conduwuit::{
+	Err, Result, err,
+	matrix::pdu::{PduEvent, gen_event_id_canonical_json},
+	warn,
+};
 use futures::FutureExt;
 use ruma::{
 	OwnedServerName, OwnedUserId,
@@ -22,10 +26,9 @@ pub(crate) async fn create_knock_event_v1_route(
 	body: Ruma<send_knock::v1::Request>,
 ) -> Result<send_knock::v1::Response> {
 	if services
-		.server
 		.config
 		.forbidden_remote_server_names
-		.contains(body.origin())
+		.is_match(body.origin().host())
 	{
 		warn!(
 			"Server {} tried knocking room ID {} who has a server name that is globally \
@@ -38,10 +41,9 @@ pub(crate) async fn create_knock_event_v1_route(
 	if let Some(server) = body.room_id.server_name() {
 		if services
-			.server
 			.config
 			.forbidden_remote_server_names
-			.contains(&server.to_owned())
+			.is_match(server.host())
 		{
 			warn!(
 				"Server {} tried knocking room ID {} which has a server name that is globally \


@@ -1,7 +1,8 @@
 #![allow(deprecated)]

 use axum::extract::State;
-use conduwuit::{Err, Result, err};
+use conduwuit::{Err, Result, err, matrix::pdu::gen_event_id_canonical_json};
+use conduwuit_service::Services;
 use futures::FutureExt;
 use ruma::{
 	OwnedRoomId, OwnedUserId, RoomId, ServerName,
@@ -13,10 +14,7 @@ use ruma::{
 };
 use serde_json::value::RawValue as RawJsonValue;

-use crate::{
-	Ruma,
-	service::{Services, pdu::gen_event_id_canonical_json},
-};
+use crate::Ruma;

 /// # `PUT /_matrix/federation/v1/send_leave/{roomId}/{eventId}`
 ///


@@ -1,6 +1,7 @@
+use conduwuit::Result;
 use ruma::api::federation::discovery::get_server_version;

-use crate::{Result, Ruma};
+use crate::Ruma;

 /// # `GET /_matrix/federation/v1/version`
 ///


@@ -1,7 +1,8 @@
 use axum::extract::State;
+use conduwuit::{Error, Result};
 use ruma::api::{client::error::ErrorKind, federation::discovery::discover_homeserver};

-use crate::{Error, Result, Ruma};
+use crate::Ruma;

 /// # `GET /.well-known/matrix/server`
 ///


@@ -59,6 +59,7 @@ conduwuit_mods = [
 argon2.workspace = true
 arrayvec.workspace = true
 axum.workspace = true
+axum-extra.workspace = true
 bytes.workspace = true
 bytesize.workspace = true
 cargo_toml.workspace = true


@@ -8,7 +8,6 @@ use std::{
 };

 use arrayvec::ArrayVec;
-use const_str::concat_bytes;
 use tikv_jemalloc_ctl as mallctl;
 use tikv_jemalloc_sys as ffi;
 use tikv_jemallocator as jemalloc;
@@ -20,7 +19,7 @@ use crate::{

 #[cfg(feature = "jemalloc_conf")]
 #[unsafe(no_mangle)]
-pub static malloc_conf: &[u8] = concat_bytes!(
+pub static malloc_conf: &[u8] = const_str::concat_bytes!(
 	"lg_extent_max_active_fit:4",
 	",oversize_threshold:16777216",
 	",tcache_max:2097152",
@@ -336,6 +335,12 @@ where
 	Ok(res)
 }

+#[tracing::instrument(
+	name = "get",
+	level = "trace",
+	skip_all,
+	fields(?key)
+)]
 fn get<T>(key: &Key) -> Result<T>
 where
 	T: Copy + Debug,
@@ -347,6 +352,12 @@ where
 	unsafe { mallctl::raw::read_mib(key.as_slice()) }.map_err(map_err)
 }

+#[tracing::instrument(
+	name = "xchg",
+	level = "trace",
+	skip_all,
+	fields(?key, ?val)
+)]
 fn xchg<T>(key: &Key, val: T) -> Result<T>
 where
 	T: Copy + Debug,

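The two attributes added above create trace-level spans around the mallctl wrappers without capturing the generic arguments, recording only the key (and value) with `Debug` formatting. A minimal sketch of the same attribute shape (assumes the `tracing` crate's attribute macro; the function body is a placeholder):

```rust
use tracing::instrument;

// skip_all: don't record any arguments automatically;
// fields(?key): explicitly record `key` with its Debug formatting.
#[instrument(name = "get", level = "trace", skip_all, fields(?key))]
fn get(key: &[usize]) -> usize {
    key.len()
}
```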

@@ -3,7 +3,7 @@ pub mod manager;
 pub mod proxy;

 use std::{
-	collections::{BTreeMap, BTreeSet, HashSet},
+	collections::{BTreeMap, BTreeSet},
 	net::{IpAddr, Ipv4Addr, Ipv6Addr, SocketAddr},
 	path::{Path, PathBuf},
 };
@@ -252,14 +252,6 @@ pub struct Config {
 	#[serde(default = "default_servernameevent_data_cache_capacity")]
 	pub servernameevent_data_cache_capacity: u32,

-	/// default: varies by system
-	#[serde(default = "default_server_visibility_cache_capacity")]
-	pub server_visibility_cache_capacity: u32,
-
-	/// default: varies by system
-	#[serde(default = "default_user_visibility_cache_capacity")]
-	pub user_visibility_cache_capacity: u32,
-
 	/// default: varies by system
 	#[serde(default = "default_stateinfo_cache_capacity")]
 	pub stateinfo_cache_capacity: u32,
@@ -648,9 +640,9 @@ pub struct Config {
 	/// Default room version conduwuit will create rooms with.
 	///
-	/// Per spec, room version 10 is the default.
+	/// Per spec, room version 11 is the default.
 	///
-	/// default: 10
+	/// default: 11
 	#[serde(default = "default_default_room_version")]
 	pub default_room_version: RoomVersionId,
@@ -723,7 +715,7 @@ pub struct Config {
 	/// Currently, conduwuit doesn't support inbound batched key requests, so
 	/// this list should only contain other Synapse servers.
 	///
-	/// example: ["matrix.org", "envs.net", "tchncs.de"]
+	/// example: ["matrix.org", "tchncs.de"]
 	///
 	/// default: ["matrix.org"]
 	#[serde(default = "default_trusted_servers")]
@@ -1369,15 +1361,18 @@ pub struct Config {
 	#[serde(default)]
 	pub prune_missing_media: bool,

-	/// Vector list of servers that conduwuit will refuse to download remote
-	/// media from.
+	/// Vector list of regex patterns of server names that conduwuit will refuse
+	/// to download remote media from.
+	///
+	/// example: ["badserver\.tld$", "badphrase", "19dollarfortnitecards"]
 	///
 	/// default: []
-	#[serde(default)]
-	pub prevent_media_downloads_from: HashSet<OwnedServerName>,
+	#[serde(default, with = "serde_regex")]
+	pub prevent_media_downloads_from: RegexSet,

-	/// List of forbidden server names that we will block incoming AND outgoing
-	/// federation with, and block client room joins / remote user invites.
+	/// List of forbidden server names via regex patterns that we will block
+	/// incoming AND outgoing federation with, and block client room joins /
+	/// remote user invites.
 	///
 	/// This check is applied on the room ID, room alias, sender server name,
 	/// sender user's server name, inbound federation X-Matrix origin, and
@@ -1385,17 +1380,21 @@ pub struct Config {
 	///
 	/// Basically "global" ACLs.
 	///
+	/// example: ["badserver\.tld$", "badphrase", "19dollarfortnitecards"]
+	///
 	/// default: []
-	#[serde(default)]
-	pub forbidden_remote_server_names: HashSet<OwnedServerName>,
+	#[serde(default, with = "serde_regex")]
+	pub forbidden_remote_server_names: RegexSet,

-	/// List of forbidden server names that we will block all outgoing federated
-	/// room directory requests for. Useful for preventing our users from
-	/// wandering into bad servers or spaces.
+	/// List of forbidden server names via regex patterns that we will block all
+	/// outgoing federated room directory requests for. Useful for preventing
+	/// our users from wandering into bad servers or spaces.
+	///
+	/// example: ["badserver\.tld$", "badphrase", "19dollarfortnitecards"]
 	///
 	/// default: []
-	#[serde(default = "HashSet::new")]
-	pub forbidden_remote_room_directory_server_names: HashSet<OwnedServerName>,
+	#[serde(default, with = "serde_regex")]
+	pub forbidden_remote_room_directory_server_names: RegexSet,

 	/// Vector list of IPv4 and IPv6 CIDR ranges / subnets *in quotes* that you
 	/// do not want conduwuit to send outbound requests to. Defaults to
@@ -1516,11 +1515,10 @@ pub struct Config {
 	/// used, and startup as warnings if any room aliases in your database have
 	/// a forbidden room alias/ID.
 	///
-	/// example: ["19dollarfortnitecards", "b[4a]droom"]
+	/// example: ["19dollarfortnitecards", "b[4a]droom", "badphrase"]
 	///
 	/// default: []
-	#[serde(default)]
-	#[serde(with = "serde_regex")]
+	#[serde(default, with = "serde_regex")]
 	pub forbidden_alias_names: RegexSet,

 	/// List of forbidden username patterns/strings.
@@ -1532,11 +1530,10 @@ pub struct Config {
 	/// startup as warnings if any local users in your database have a forbidden
 	/// username.
 	///
-	/// example: ["administrator", "b[a4]dusernam[3e]"]
+	/// example: ["administrator", "b[a4]dusernam[3e]", "badphrase"]
 	///
 	/// default: []
-	#[serde(default)]
-	#[serde(with = "serde_regex")]
+	#[serde(default, with = "serde_regex")]
 	pub forbidden_usernames: RegexSet,

 	/// Retry failed and incomplete messages to remote servers immediately upon
@@ -2035,10 +2032,6 @@ fn default_servernameevent_data_cache_capacity() -> u32 {
 	parallelism_scaled_u32(100_000).saturating_add(500_000)
 }

-fn default_server_visibility_cache_capacity() -> u32 { parallelism_scaled_u32(500) }
-
-fn default_user_visibility_cache_capacity() -> u32 { parallelism_scaled_u32(1000) }
-
 fn default_stateinfo_cache_capacity() -> u32 { parallelism_scaled_u32(100) }

 fn default_roomid_spacehierarchy_cache_capacity() -> u32 { parallelism_scaled_u32(1000) }
@@ -2158,7 +2151,12 @@ fn default_rocksdb_max_log_file_size() -> usize {
 fn default_rocksdb_parallelism_threads() -> usize { 0 }

-fn default_rocksdb_compression_algo() -> String { "zstd".to_owned() }
+fn default_rocksdb_compression_algo() -> String {
+	cfg!(feature = "zstd_compression")
+		.then_some("zstd")
+		.unwrap_or("none")
+		.to_owned()
+}

 /// Default RocksDB compression level is 32767, which is internally read by
 /// RocksDB as the default magic number and translated to the library's default
@@ -2177,7 +2175,7 @@ fn default_rocksdb_stats_level() -> u8 { 1 }
 // I know, it's a great name
 #[must_use]
 #[inline]
-pub fn default_default_room_version() -> RoomVersionId { RoomVersionId::V10 }
+pub fn default_default_room_version() -> RoomVersionId { RoomVersionId::V11 }

 fn default_ip_range_denylist() -> Vec<String> {
 	vec![
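With these fields switched from `HashSet<OwnedServerName>` to `RegexSet` (deserialized through `serde_regex`), the config entries are regex patterns rather than exact server names. A hedged example of what such entries could look like in the TOML config, reusing the patterns from the doc comments (the `[global]` section follows conduwuit's config layout; TOML literal strings avoid double-escaping the backslash):

```toml
[global]
# Each entry is a regex; together they compile into one RegexSet.
forbidden_remote_server_names = ['badserver\.tld$', 'badphrase', '19dollarfortnitecards']
forbidden_remote_room_directory_server_names = ['badserver\.tld$']
prevent_media_downloads_from = ['badserver\.tld$']
```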

@@ -136,6 +136,7 @@ macro_rules! err_log {
 }

 #[macro_export]
+#[collapse_debuginfo(yes)]
 macro_rules! err_lev {
 	(debug_warn) => {
 		if $crate::debug::logging() {


@@ -81,6 +81,8 @@ pub enum Error {
 	#[error("Tracing reload error: {0}")]
 	TracingReload(#[from] tracing_subscriber::reload::Error),
 	#[error(transparent)]
+	TypedHeader(#[from] axum_extra::typed_header::TypedHeaderRejection),
+	#[error(transparent)]
 	Yaml(#[from] serde_yaml::Error),

 	// ruma/conduwuit


@@ -86,7 +86,7 @@ pub(super) fn bad_request_code(kind: &ErrorKind) -> StatusCode {
 pub(super) fn ruma_error_message(error: &ruma::api::client::error::Error) -> String {
 	if let ErrorBody::Standard { message, .. } = &error.body {
-		return message.to_string();
+		return message.clone();
 	}

 	format!("{error}")

src/core/matrix/mod.rs (new file, 9 lines)

@@ -0,0 +1,9 @@
+//! Core Matrix Library
+
+pub mod event;
+pub mod pdu;
+pub mod state_res;
+
+pub use event::Event;
+pub use pdu::{PduBuilder, PduCount, PduEvent, PduId, RawPduId, StateKey};
+pub use state_res::{EventTypeExt, RoomVersion, StateMap, TypeStateKey};


@@ -1,7 +1,6 @@
 mod builder;
 mod content;
 mod count;
-mod event;
 mod event_id;
 mod filter;
 mod id;
@@ -17,8 +16,8 @@ mod unsigned;
 use std::cmp::Ordering;

 use ruma::{
-	CanonicalJsonObject, CanonicalJsonValue, EventId, OwnedEventId, OwnedRoomId, OwnedServerName,
-	OwnedUserId, UInt, events::TimelineEventType,
+	CanonicalJsonObject, CanonicalJsonValue, EventId, MilliSecondsSinceUnixEpoch, OwnedEventId,
+	OwnedRoomId, OwnedServerName, OwnedUserId, RoomId, UInt, UserId, events::TimelineEventType,
 };
 use serde::{Deserialize, Serialize};
 use serde_json::value::RawValue as RawJsonValue;
@@ -27,12 +26,12 @@ pub use self::{
 	Count as PduCount, Id as PduId, Pdu as PduEvent, RawId as RawPduId,
 	builder::{Builder, Builder as PduBuilder},
 	count::Count,
-	event::Event,
 	event_id::*,
 	id::*,
 	raw_id::*,
 	state_key::{ShortStateKey, StateKey},
 };
+use super::Event;
 use crate::Result;

 /// Persistent Data Unit (Event)
@@ -79,6 +78,36 @@ impl Pdu {
 	}
 }

+impl Event for Pdu {
+	type Id = OwnedEventId;
+
+	fn event_id(&self) -> &Self::Id { &self.event_id }
+
+	fn room_id(&self) -> &RoomId { &self.room_id }
+
+	fn sender(&self) -> &UserId { &self.sender }
+
+	fn event_type(&self) -> &TimelineEventType { &self.kind }
+
+	fn content(&self) -> &RawJsonValue { &self.content }
+
+	fn origin_server_ts(&self) -> MilliSecondsSinceUnixEpoch {
+		MilliSecondsSinceUnixEpoch(self.origin_server_ts)
+	}
+
+	fn state_key(&self) -> Option<&str> { self.state_key.as_deref() }
+
+	fn prev_events(&self) -> impl DoubleEndedIterator<Item = &Self::Id> + Send + '_ {
+		self.prev_events.iter()
+	}
+
+	fn auth_events(&self) -> impl DoubleEndedIterator<Item = &Self::Id> + Send + '_ {
+		self.auth_events.iter()
+	}
+
+	fn redacts(&self) -> Option<&Self::Id> { self.redacts.as_ref() }
+}
+
 /// Prevent derived equality which wouldn't limit itself to event_id
 impl Eq for Pdu {}

@@ -87,12 +116,12 @@ impl PartialEq for Pdu {
 	fn eq(&self, other: &Self) -> bool { self.event_id == other.event_id }
 }

+/// Ordering determined by the Pdu's ID, not the memory representations.
+impl PartialOrd for Pdu {
+	fn partial_cmp(&self, other: &Self) -> Option<Ordering> { Some(self.cmp(other)) }
+}
+
 /// Ordering determined by the Pdu's ID, not the memory representations.
 impl Ord for Pdu {
 	fn cmp(&self, other: &Self) -> Ordering { self.event_id.cmp(&other.event_id) }
 }
-
-/// Ordering determined by the Pdu's ID, not the memory representations.
-impl PartialOrd for Pdu {
-	fn partial_cmp(&self, other: &Self) -> Option<Ordering> { Some(self.cmp(other)) }
-}

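With `Pdu` implementing the core `Event` trait, callers can be written against the trait's accessors instead of the concrete struct. A small hypothetical sketch of consuming that interface (assumes `Event` re-exported from `conduwuit_core::matrix` per the new mod.rs above; `summarize` is not part of the diff, and the `Display` bound on `E::Id` is an assumption about the trait's associated type):

```rust
use std::fmt::Display;

use conduwuit_core::matrix::Event;

/// Format a one-line summary using only the trait surface shown above.
fn summarize<E: Event>(event: &E) -> String
where
    E::Id: Display,
{
    format!(
        "{} in {} by {} ({} prev_events)",
        event.event_id(),
        event.room_id(),
        event.sender(),
        event.prev_events().count(),
    )
}
```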
Some files were not shown because too many files have changed in this diff.