# Knot Resolver issues
https://gitlab.nic.cz/knot/knot-resolver/-/issues

## Cipher order and other minor security issues (#768)
https://gitlab.nic.cz/knot/knot-resolver/-/issues/768
2022-10-09 · Zdeněk Švarc

I suggest working on compliance with the [testssl.sh](https://testssl.sh/) security test, especially the server cipher order, although this is a common finding in such tests.
DoT test results follow:
```
Testing protocols via sockets except NPN+ALPN
SSLv2 not offered (OK)
SSLv3 not offered (OK)
TLS 1 not offered
TLS 1.1 not offered
TLS 1.2 offered (OK)
TLS 1.3 offered (OK): final
NPN/SPDY not offered
ALPN/HTTP2 not offered
Testing cipher categories
NULL ciphers (no encryption) not offered (OK)
Anonymous NULL Ciphers (no authentication) not offered (OK)
Export ciphers (w/o ADH+NULL) not offered (OK)
LOW: 64 Bit + DES, RC[2,4] (w/o export) not offered (OK)
Triple DES Ciphers / IDEA not offered
Obsolete CBC ciphers (AES, ARIA etc.) offered
Strong encryption (AEAD ciphers) offered (OK)
Testing robust (perfect) forward secrecy, (P)FS -- omitting Null Authentication/Encryption, 3DES, RC4
PFS is offered (OK) TLS_AES_256_GCM_SHA384 TLS_CHACHA20_POLY1305_SHA256 ECDHE-ECDSA-AES256-GCM-SHA384 ECDHE-ECDSA-AES256-SHA ECDHE-ECDSA-CHACHA20-POLY1305
ECDHE-ECDSA-AES256-CCM TLS_AES_128_GCM_SHA256 TLS_AES_128_CCM_SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-SHA ECDHE-ECDSA-AES128-CCM
Elliptic curves offered: prime256v1 secp384r1 secp521r1 X25519 X448
Finite field group: ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192
Testing server preferences
Has server cipher order? no (NOT ok)
Negotiated protocol TLSv1.3
Negotiated cipher TLS_AES_256_GCM_SHA384, 253 bit ECDH (X25519) (limited sense as client will pick)
Negotiated cipher per proto (limited sense as client will pick)
ECDHE-ECDSA-AES256-GCM-SHA384: TLSv1.2
TLS_AES_256_GCM_SHA384: TLSv1.3
No further cipher order check has been done as order is determined by the client
Testing server defaults (Server Hello)
TLS extensions (standard) "EC point formats/#11" "session ticket/#35" "renegotiation info/#65281" "key share/#51" "supported versions/#43"
Session Ticket RFC 5077 hint 21600 seconds, session tickets keys seems to be rotated < daily
SSL Session ID support yes
Session Resumption Tickets: yes, ID: yes
TLS clock skew Random values, no fingerprinting possible
Signature Algorithm SHA256 with RSA
Server key size EC 384 bits
Server key usage Digital Signature
Server extended key usage TLS Web Server Authentication, TLS Web Client Authentication
Serial 047E...0D1D (OK: length 18)
Fingerprints SHA1 1A5...BAA
SHA256 53EB...DAB
Common Name (CN) --
subjectAltName (SAN) --
Issuer R3 (Let's Encrypt from US)
Trust (hostname) Ok via SAN (same w/o SNI)
Chain of trust Ok
EV cert (experimental) no
ETS/"eTLS", visibility info not present
Certificate Validity (UTC) 89 >= 30 days (2022-10-08 14:40 --> 2023-01-06 14:40)
# of certificates provided 3
Certificate Revocation List --
OCSP URI http://r3.o.lencr.org
OCSP stapling not offered
OCSP must staple extension --
DNS CAA RR (experimental) not offered
Certificate Transparency yes (certificate extension)
Testing vulnerabilities
Heartbleed (CVE-2014-0160) not vulnerable (OK), no heartbeat extension
CCS (CVE-2014-0224) not vulnerable (OK)
Ticketbleed (CVE-2016-9244), experiment. -- (applicable only for HTTPS)
ROBOT Server does not support any cipher suites that use RSA key transport
Secure Renegotiation (RFC 5746) supported (OK)
Secure Client-Initiated Renegotiation VULNERABLE (NOT ok), potential DoS threat
CRIME, TLS (CVE-2012-4929) not vulnerable (OK) (not using HTTP anyway)
POODLE, SSL (CVE-2014-3566) not vulnerable (OK), no SSLv3 support
TLS_FALLBACK_SCSV (RFC 7507) No fallback possible (OK), no protocol below TLS 1.2 offered
SWEET32 (CVE-2016-2183, CVE-2016-6329) not vulnerable (OK)
FREAK (CVE-2015-0204) not vulnerable (OK)
DROWN (CVE-2016-0800, CVE-2016-0703) not vulnerable on this host and port (OK)
no RSA certificate, thus certificate can't be used with SSLv2 elsewhere
LOGJAM (CVE-2015-4000), experimental not vulnerable (OK): no DH EXPORT ciphers, no DH key detected with <= TLS 1.2
BEAST (CVE-2011-3389) not vulnerable (OK), no SSL3 or TLS1
LUCKY13 (CVE-2013-0169), experimental potentially VULNERABLE, uses cipher block chaining (CBC) ciphers with TLS. Check patches
RC4 (CVE-2013-2566, CVE-2015-2808) no RC4 ciphers detected (OK)
Testing 370 ciphers via OpenSSL plus sockets against the server, ordered by encryption strength
Hexcode Cipher Suite Name (OpenSSL) KeyExch. Encryption Bits Cipher Suite Name (IANA/RFC)
-----------------------------------------------------------------------------------------------------------------------------
x1302 TLS_AES_256_GCM_SHA384 ECDH 253 AESGCM 256 TLS_AES_256_GCM_SHA384
x1303 TLS_CHACHA20_POLY1305_SHA256 ECDH 253 ChaCha20 256 TLS_CHACHA20_POLY1305_SHA256
xc02c ECDHE-ECDSA-AES256-GCM-SHA384 ECDH 253 AESGCM 256 TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
xc00a ECDHE-ECDSA-AES256-SHA ECDH 253 AES 256 TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
xcca9 ECDHE-ECDSA-CHACHA20-POLY1305 ECDH 253 ChaCha20 256 TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
xc0ad ECDHE-ECDSA-AES256-CCM ECDH 253 AESCCM 256 TLS_ECDHE_ECDSA_WITH_AES_256_CCM
x1301 TLS_AES_128_GCM_SHA256 ECDH 253 AESGCM 128 TLS_AES_128_GCM_SHA256
x1304 TLS_AES_128_CCM_SHA256 ECDH 253 AESCCM 128 TLS_AES_128_CCM_SHA256
xc02b ECDHE-ECDSA-AES128-GCM-SHA256 ECDH 253 AESGCM 128 TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
xc009 ECDHE-ECDSA-AES128-SHA ECDH 253 AES 128 TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
xc0ac ECDHE-ECDSA-AES128-CCM ECDH 253 AESCCM 128 TLS_ECDHE_ECDSA_WITH_AES_128_CCM
Could not determine the protocol, only simulating generic clients.
Running client simulations via sockets
Android 8.1 (native) TLSv1.2 ECDHE-ECDSA-AES128-GCM-SHA256, 253 bit ECDH (X25519)
Android 9.0 (native) No connection
Android 10.0 (native) No connection
Android 11 (native) No connection
Android 12 (native) No connection
Java 7u25 No connection
Java 8u161 TLSv1.2 ECDHE-ECDSA-AES256-SHA, 256 bit ECDH (P-256)
Java 11.0.2 (OpenJDK) TLSv1.3 TLS_AES_128_GCM_SHA256, 256 bit ECDH (P-256)
Java 17.0.3 (OpenJDK) TLSv1.3 TLS_AES_256_GCM_SHA384, 253 bit ECDH (X25519)
go 1.17.8 No connection
LibreSSL 2.8.3 (Apple) TLSv1.2 ECDHE-ECDSA-CHACHA20-POLY1305, 253 bit ECDH (X25519)
OpenSSL 1.0.2e TLSv1.2 ECDHE-ECDSA-AES256-GCM-SHA384, 256 bit ECDH (P-256)
OpenSSL 1.1.0l (Debian) TLSv1.2 ECDHE-ECDSA-AES256-GCM-SHA384, 253 bit ECDH (X25519)
OpenSSL 1.1.1d (Debian) TLSv1.3 TLS_AES_256_GCM_SHA384, 253 bit ECDH (X25519)
OpenSSL 3.0.3 (git) TLSv1.3 TLS_AES_256_GCM_SHA384, 253 bit ECDH (X25519)
```

---

## I cannot start kresd with the data.mdb that I generate through lua (#763)
https://gitlab.nic.cz/knot/knot-resolver/-/issues/763
2022-09-01 · make

I generate a data.mdb with the cache maximum size set to 1 GB and it works well. But after I increase the size to 50 GB and generate a new data.mdb, kresd fails to start.
![image](/uploads/446655285edea1962b270cb9d7fa1fc3/image.png)
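For context, the cache maximum size in kresd's Lua config sets the LMDB map size; a data.mdb created under a larger map size generally cannot be opened with a smaller limit, and the filesystem must be able to accommodate the full size. A hedged sketch (the path is the usual default and an assumption here, not taken from the report):

```lua
-- sketch, not the reporter's exact config: both forms below set the
-- LMDB map size; shrinking it below the size an existing data.mdb was
-- created with can make the cache fail to open
cache.open(50 * GB, 'lmdb:///var/cache/knot-resolver')
-- shorthand for resizing the already-open default cache:
cache.size = 50 * GB
```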
And before I run kresd,
![捕获](/uploads/c2db07acd5f60eef81fc7f54ff0d1b22/捕获.PNG)
But after I run kresd,
![image](/uploads/1de5d306ac89983f96776d544f948884/image.png)

---

## logging: consider adding startup and shutdown messages (#761)
https://gitlab.nic.cz/knot/knot-resolver/-/issues/761
2022-10-10 · Matt Taggart

I thought I was having a problem with my kresd.log because it wasn't getting updated. Then I realized that mostly only errors get logged there, and nothing is printed on service start or stop. If I issued a known-bad query I could cause an update there.
Please consider log entries on start/stop. I note that kres-cache-gc already does this on startup:
```kres-cache-gc[18916]: Knot Resolver Cache Garbage Collector, version 5.5.2```
so maybe something similar for kresd, and shutting down messages for both.
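Until such messages exist, one workaround (a sketch relying only on the fact that the Lua config file is executed once per instance start) is to emit your own marker from the config:

```lua
-- hypothetical startup marker: the config file runs when the instance
-- starts, so this line appears in the log once per (re)start
print('kresd starting: configuration loaded')
```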
Thanks.

---

## manager: datamodel: location for default values and constants (#754)
https://gitlab.nic.cz/knot/knot-resolver/-/issues/754
2022-07-04 · Aleš Mrázek

We should agree on the location and definition of default values and constants. Some are currently defined in the configuration schema and some outside of it.
This issue follows the [comment](https://gitlab.nic.cz/knot/knot-resolver/-/merge_requests/1280#note_256358) in !1280.

---

## Protocol layers (#752)
https://gitlab.nic.cz/knot/knot-resolver/-/issues/752
2022-07-18 · Oto Šťáva

See snippet $1448.

---

## tests/packaging: failing tests (#744)
https://gitlab.nic.cz/knot/knot-resolver/-/issues/744
2022-06-01 · Oto Šťáva

I'm opening this issue so that we track these test failures somewhere, but I'm not sure what we can do about them.
* `centos_7`
  * outdated `luarocks` (is `2.x`, required `3.x`) - cannot install `process`. I've tried to resolve this by explicitly installing an older `process` that does not require the new `luarocks`, but it attempts to install the new version anyway
* `centos_8`
  * appstream fails to prepare the internal mirrorlist
  * no such command `config-manager`
* `fedora_31`
  * outdated `knot` - is `3.0.1`, required `3.0.2`
* `leap_15.2`
  * package conflicts
Related MR: !1304 (adds better logging for failing commands)

---

## knot-resolver crashes regularly on macOS 12.3.1 (Intel and ARM), since updating to 5.5.0 (#737)
https://gitlab.nic.cz/knot/knot-resolver/-/issues/737
2022-06-20 · owah

Hello team,
with the latest update to knot-resolver on macOS, I've been experiencing many crashes and I don't know how to debug them, as the log stays empty.
I'll be working or browsing and suddenly pages stop loading; if I check on the service, as shown below, it is in an error state. Running `brew services restart knot-resolver` fixes the issue until the next crash.
```
$ sudo brew services 14:10:29
Password:
Name Status User File
dbus none
emacs none
knot none
knot-resolver started root /Library/LaunchDaemons/homebrew.mxcl.knot-resolver.plist
stubby none
tor none
unbound none
$ sudo brew services 14:24:54
Password:
Name Status User File
dbus none
emacs none
knot none
knot-resolver error 6 root /Library/LaunchDaemons/homebrew.mxcl.knot-resolver.plist
stubby none
tor none
unbound none
```
This is the config file I am using. DNSSEC is disabled because nextdns already validates it for me, and it used to create random SERVFAILs.
```
-- Network interface configuration
net.listen('127.0.0.1', 53, { kind = 'dns' })
--net.listen('127.0.0.1', 853, { kind = 'tls' })
--net.listen('127.0.0.1', 443, { kind = 'doh2' })
--net.listen('::1', 53, { kind = 'dns', freebind = true })
--net.listen('::1', 853, { kind = 'tls', freebind = true })
--net.listen('::1', 443, { kind = 'doh2' })
-- Load useful modules
modules = {
'hints > iterate', -- Allow loading /etc/hosts or custom root hints
'stats', -- Track internal statistics
'predict', -- Prefetch expiring/frequent records
}
log_level('err')
policy.add(policy.all(policy.TLS_FORWARD({
{'45.90.28.0', hostname='<removed>.dns1.nextdns.io'},
{'2a07:a8c0::', hostname='<removed>.dns1.nextdns.io'},
{'45.90.30.0', hostname='<removed>.dns2.nextdns.io'},
{'2a07:a8c1::', hostname='<removed>.dns2.nextdns.io'}
})))
trust_anchors.remove('.')
-- Cache size
cache.size = 100 * MB
```
My log file only shows:
```
...
[system] Knot Resolver is tested on Linux, other platforms might exhibit bugs.
Please report issues to https://gitlab.nic.cz/knot/knot-resolver/issues/
Thank you for your time and interest!
[system] Knot Resolver is tested on Linux, other platforms might exhibit bugs.
Please report issues to https://gitlab.nic.cz/knot/knot-resolver/issues/
Thank you for your time and interest!
[system] Knot Resolver is tested on Linux, other platforms might exhibit bugs.
Please report issues to https://gitlab.nic.cz/knot/knot-resolver/issues/
Thank you for your time and interest!
```
I had already changed the log level to debug once too, but it seems to crash so hard that it doesn't get a chance to write anything to the log.
Any advice on how to get better reporting for this issue? It also surprises me that no one else has reported this after upgrading to 5.5.0. At first I assumed my machine was at fault, but only a few days ago I set up a brand-new machine (a MacBook with an ARM processor) and the crashing behaviour is the same.

---

## knot resolver in docker in production (#731)
https://gitlab.nic.cz/knot/knot-resolver/-/issues/731
2022-03-24 · elandorr

Hello all,
I'm looking to run the resolver in prod docker. The authoritative knot works well in docker - unfortunately I saw you deprecated forking for the resolver.
Your docker image is, as you say, just for testing purposes and does not include garbage collection or a watchdog replacement.
Would you please let us know the best practices for accomplishing this?
I suppose we need:
- multiple processes for prod (as far as I can tell, in dockerland they're meant to be spawned and handled by the parent, but kresd is separate)
- therefore supervisord (Which is not all that nice, as the auth. knotd is relatively lightweight and handles itself without any additional bloat. I strive to keep things as light as possible, and wouldn't want to start creating frankensteins for the resolver either, if at all avoidable.)
- a way to run kres-cache-gc automatically from inside the container (not externally pushed as that'd be a bit of a 'hackjob')
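The pieces listed above might be wired together with supervisord roughly like this (a hypothetical sketch, not an official recipe; the paths, flags, and process counts are assumptions to check against your installation):

```ini
; hypothetical sketch - paths, flags and process counts are assumptions,
; not an official recipe; check them against your installation
[supervisord]
nodaemon=true

; the "multiple processes" item: 4 workers sharing config and cache
[program:kresd]
command=/usr/sbin/kresd -c /etc/knot-resolver/kresd.conf -n
directory=/run/knot-resolver
numprocs=4
process_name=%(program_name)s_%(process_num)02d

; the cache garbage collector, run once per container alongside workers
[program:kres-cache-gc]
command=/usr/sbin/kres-cache-gc -c /var/cache/knot-resolver -d 1000
```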
A standard, 'official' solution would benefit many. We'd appreciate your input!
I tried to research other people's solutions, but it looks as though nobody has published about it yet. Everyone just keeps using unbound, especially in docker. I'd really like to give kresd a try as knotd is great! It even seems to be smaller than unbound.
Have a great evening!

---

## Resolver returns SERVFAIL until restarted (#719)
https://gitlab.nic.cz/knot/knot-resolver/-/issues/719
2023-09-28 · Jan Baier

I am using knot-resolver 5.4.4-cznic.1 on Debian 10. After some (rather long) time, the resolver starts to return SERVFAIL for some records (those secured by DNSSEC).
From what I was able to find, I believe I stumbled upon a bug which might be related to the following issues:
* https://gitlab.nic.cz/knot/knot-resolver/-/issues/423
* https://gitlab.nic.cz/knot/knot-resolver/-/issues/493
It can be remediated quickly just by restarting the kresd service, which makes me wonder whether this is an issue in the resolver or rather in the Debian packaging (missing some restart hooks?).
From the log (full log attached) I can see:
1. There are several failed attempts to refresh trust anchors
`[taupd ] active refresh failed for . with rcode: 2`
2. After a few days (when the cache expires?) the problem starts to manifest itself and the resolver starts to respond with SERVFAIL
```
[plan ][00000.00] plan 'haproxy.luffy.cx.' type 'A' uid [17896.00]
[iterat][17896.00] 'haproxy.luffy.cx.' type 'A' new uid was assigned .01, parent uid .00
[cache ][17896.01] => skipping exact RR: rank 060 (min. 030), new TTL -155800
[cache ][17896.01] => skipping unfit NS RR: rank 002, new TTL -76600
[cache ][17896.01] => skipping unfit NS RR: rank 002, new TTL -81800
[cache ][17896.01] => trying zone: ., NSEC, hash 0
[cache ][17896.01] => NSEC sname: range search miss (!covers)
[cache ][17896.01] => skipping zone: ., NSEC, hash 0;new TTL -123456789, ret -2
[zoncut][17896.01] found cut: . (rank 060 return codes: DS -2, DNSKEY -116)
[resolv][17896.01] >< TA: '.'
[plan ][17896.01] plan '.' type 'DNSKEY' uid [17896.02]
[iterat][17896.02] '.' type 'DNSKEY' new uid was assigned .03, parent uid .01
[cache ][17896.03] => skipping exact RR: rank 060 (min. 030), new TTL -5783
[cache ][17896.03] => trying zone: ., NSEC, hash 0
[cache ][17896.03] => NSEC sname: match but failed type check
[cache ][17896.03] => skipping zone: ., NSEC, hash 0;new TTL -123456789, ret -2
[select][00000.00] NO6: is KO [exploit]
[select][17896.03] => id: '28780' choosing: 'i.root-servers.net.'@'2001:7fe::53#00053' with timeout 10000 ms zone cut: '.'
[resolv][17896.03] => id: '28780' querying: 'i.root-servers.net.'@'2001:7fe::53#00053' zone cut: '.' qname: '.' qtype: 'DNSKEY' proto: 'tcp'
[worker][17896.03] => connecting to: '2001:7fe::53#00053'
[select][17896.03] NO6: timed out, but bad already
[select][17896.03] => id: '28780' noting selection error: 'i.root-servers.net.'@'2001:7fe::53#00053' zone cut: '.' error: 3 TCP_CONNECT_FAILED
[iterat][17896.03] '.' type 'DNSKEY' new uid was assigned .04, parent uid .01
[select][00000.00] NO6: is KO [exploit]
[select][17896.04] => id: '17180' choosing: 'm.root-servers.net.'@'2001:dc3::35#00053' with timeout 10000 ms zone cut: '.'
[resolv][17896.04] => id: '17180' querying: 'm.root-servers.net.'@'2001:dc3::35#00053' zone cut: '.' qname: '.' qtype: 'DNSKEY' proto: 'udp'
```
3. After restarting the service via `systemctl restart kresd@1` the problem instantly disappears
It seems to me like the resolver lost all root servers and needs a restart to reload them. Also, it might be worth mentioning that there is no IPv6 connectivity on the machine running the resolver.
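Side note on the `NO6: is KO` lines: since the machine has no IPv6 connectivity, it may help to tell kresd not to try IPv6 upstream at all. This is a standard documented option; a minimal config sketch:

```lua
-- on an IPv4-only host, stop kresd from contacting upstreams over IPv6
net.ipv6 = false
```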
I am not really sure how to reproduce this without waiting a couple of days or weeks. This time, the issue appeared after 23 days.
Full log: [kresd.log](/uploads/0578a16d083ff60b5280a77ce4b99cfe/kresd.log)

---

## integration of manager into kresd (#715)
https://gitlab.nic.cz/knot/knot-resolver/-/issues/715
2023-09-28 · Tomas Krizek

Let this issue be a checklist of requirements and ideas that need to be done before we're ready to merge the manager into master. Feel free to edit the description and add your TODOs as well.
### Requirements
- [ ] config: verify that all values in the datamodel jinja2 templates are either (a) escaped or (b) validated before use (to prevent code injection from declarative values to lua) [goal: security - API should not be abusable] (related !1291)
- [ ] config: ensure all recently added lua configuration options have been added to declarative config as well (e.g. go through NEWS file and check) and make sure it won't be a problem in future.
- [x] new config for kresd < 5.5.0 !1289
- [ ] new config for kresd >= 5.5.0
- [x] new declarative policy module !1313
- [ ] config: update our default/example [configs](https://gitlab.nic.cz/knot/knot-resolver/-/tree/master/etc/config)
- [x] packaging: ensure all manager's dependencies have been properly added in `distro/pkg` (related !1248)
- [x] packaging: cover the most basic use-cases by packaging tests executed on all target distros (related #713)
- [ ] tests: manually test migration path on all target distros
- [x] usability: prepare [systemd files](https://gitlab.nic.cz/knot/knot-resolver/-/tree/master/systemd) for manager
- [x] usability: figure out how to support declarative config on unsupported platforms (CentOS7) and in our [docker image](https://gitlab.nic.cz/knot/knot-resolver/-/blob/master/Dockerfile) (related #734)
- [x] usability: ensure that the manager is applicable to ODVR usecase (separate workers/instances for each DNS protocol)
- [x] docs: document new way of using kresd with manager, including systemd interaction, quick start guide, declarative config docs, how to get logs etc.
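The first checklist item can be illustrated with a small Python sketch (a hypothetical helper, not the manager's actual code): values flowing from the declarative datamodel into jinja2-rendered Lua templates should be emitted as quoted Lua string literals, never spliced in raw.

```python
# Hypothetical helper (not the manager's real code): render a datamodel
# value as a double-quoted Lua string literal so that quote characters
# in the value cannot terminate the literal and inject Lua code.
_ESCAPES = {'\\': '\\\\', '"': '\\"', '\n': '\\n', '\r': '\\r', '\0': '\\0'}

def lua_quote(value: str) -> str:
    """Return `value` as a safe double-quoted Lua string literal."""
    return '"' + ''.join(_ESCAPES.get(ch, ch) for ch in value) + '"'

print(lua_quote('example.com'))      # -> "example.com"
print(lua_quote('"; os.exit() --'))  # quotes escaped, stays one literal
```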
### Suggestions
- [ ] tests: comprehensive unit tests of configuration: prepare a collection of example declarative configs and their lua counterparts; use CLI conversion tool to verify these
- [ ] logging: ensure logs from manager look consistent with kresd logs
- [x] logging: try to find a way to display aggregated log output
- [x] usability: support supervisord for containers
- [ ] usability: keep manager component optional (minimal use-case: only run config conversion, but use current kresd@1 approach)
- [ ] blog: blogpost(s) about the manager, comparison with `kresd@`, benefits, examples

Milestone: 6.0.0

---

## meson_version needs increasing (#714)
https://gitlab.nic.cz/knot/knot-resolver/-/issues/714
2023-09-28 · daurnimator

The following warning appears when building:
```
Build targets in project: 31
WARNING: Project specifies a minimum meson_version '>=0.49' but uses features which were added in newer versions:
* 0.52.0: {'priority arg in test'}
NOTICE: Future-deprecated features used:
* 0.56.0: {'Dependency.get_pkgconfig_variable'}
```

---

## datamodel: network: more readable 'kind' in listen interfaces (#709)
https://gitlab.nic.cz/knot/knot-resolver/-/issues/709
2023-09-28 · Aleš Mrázek

- `dns-over-https` -> `doh`
- `dns-over-tls` -> `dot`

---

## Add integration test with some complex configuration (#707)
https://gitlab.nic.cz/knot/knot-resolver/-/issues/707
2023-09-28 · Vaclav Sraier

For example, try to translate the configuration from ODVR and see if it works. The ODVR configuration can be found in the discussion of issue knot-resolver-manager#38.

---

## Add tests for all quick start configuration snippets in kresd documentation (#704)
https://gitlab.nic.cz/knot/knot-resolver/-/issues/704
2023-09-28 · Vaclav Sraier
Assignee: Aleš Mrázek

https://knot-resolver.readthedocs.io/en/stable/modules-policy.html

---

## Unnecessarily performed tasks of kresd instances (#697)
https://gitlab.nic.cz/knot/knot-resolver/-/issues/697
2022-01-16 · Aleš Mrázek

By default, some tasks are unnecessarily performed on all running kresd instances, i.e. tasks that only need to be performed once.
- **cache prefilling:** It only needs to be performed once. Prefilling can also take a relatively long time ([#417](https://gitlab.nic.cz/knot/knot-resolver/-/issues/417)). Maybe it should be done by some separate process.
- **secret for TLS session resumption:** By default, each instance has/generates its own secret. Therefore, clients' session tickets for a particular kresd instance are not compatible with other instances. The secret should be the same for all instances and should change automatically at intervals. This ensures compatibility between kresd instances and increases security.

---

## How to use static hints for local PTR records? (#691)
https://gitlab.nic.cz/knot/knot-resolver/-/issues/691
2021-12-25 · Jon Polom

Is it possible to use the static hints module to provide local PTR records? This is [hinted at](https://knot-resolver.readthedocs.io/en/stable/modules-hints.html#static-hints) in the documentation, however no example is provided. Perhaps I am misinterpreting what is possible with kresd; if that is the case, please clarify.

---

## Please document SOA included in authority section for queries within local (and how to avoid it) (#686)
https://gitlab.nic.cz/knot/knot-resolver/-/issues/686
2021-11-13 · Sergio Callegari

As mentioned in https://forum.turris.cz/t/avahi-local-domain-warning-on-ubuntu/13437, Knot Resolver answers any query within `local` with NXDOMAIN, but it adds this SOA in the authority section:
```
$ dig local
;; WARNING: .local is reserv...As mentioned in https://forum.turris.cz/t/avahi-local-domain-warning-on-ubuntu/13437, knot resolver answers any queries within local by NXDOMAIN but it adds this SOA in the authority section:
```
$ dig local
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 56352
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
;; QUESTION SECTION:
;local. IN A
;; AUTHORITY SECTION:
local. 10800 IN SOA local. nobody.invalid. 1 3600 1200 604800 10800
;; ADDITIONAL SECTION:
explanation.invalid. 10800 IN TXT “Blocking is mandated by standards, see references on https://www.iana.org/assignments/special-use-domain-names/special-use-domain-names.xhtml”
```
Unfortunately, this confuses `systemd-resolved` (maybe just older versions of it) and completely breaks mDNS name resolution on ubuntu focal (and possibly other distros).
What happens is as follows:
1. You do something like `ping foo.local`
2. Ubuntu focal has by default the host field in nsswitch.conf set to:
`hosts: files mdns4_minimal [NOTFOUND=return] dns`
so it tries the `/etc/hosts/` file and then mdns via the nss `mdns4_minimal` client
3. Before doing anything else, the `mdns4_minimal` client tries unicast DNS, looking for a SOA for `local.`. This mechanism is
present in the `mdns4_minimal` client to avoid issues when `local` is under DNS control, and is documented at
https://github.com/lathiat/nss-mdns/blob/master/README.md
4. Ubuntu focal uses by default `systemd-resolved` as a caching DNS, so the query from `mdns4_minimal` gets to it
5. `systemd-resolved` passes the query to the DNS it is configured to use. If this is Knot resolver it gets that special SOA
in the authority section and turns it into a regular SOA reply (no NXDOMAIN)
6. `mdns4_minimal` receives a SOA reply for local and gives up
7. At this point DNS is queried. Back to `systemd-resolved` now trying to get the A field for `foo.local`.
8. By default `systemd-resolved` on ubuntu is configured not to do mDNS itself (even if it has this capability). Hence the
query at the previous point fails.
9. Rather than pinging foo.local you get an error.
I believe that:
- This is not a bug in knot resolver, rather a bug in `systemd-resolved` that makes itself confused by a legitimate answer
from knot resolver
- The issue in `systemd-resolved` may have been fixed in versions of systemd more recent than the one shipped in Ubuntu focal
(at least some quick testing on a rolling distro seems not to give the problem)
However, because:
1. Ubuntu Focal is extremely widespread
2. Ubuntu Focal is unlikely to backport fixes to its `systemd-resolved` (because it is shipped in the `systemd` package, which is quite delicate to touch)
3. The returning of the special SOA for things within `local` is something that older versions of knot resolver did not do
I believe that it could be worth adding an explicit note in the knot resolver documentation about the special SOA returned for queries within `local` and on how to avoid it in case it causes issues with mDNS name resolution.
I have observed that something like
```
policy.add(policy.suffix(policy.DROP, policy.todnames({'local.'})))
```
added to `kresd.conf` seems to be enough to work around the problem, but I am not knowledgeable enough to know whether this is the right solution.

---

## performance problem because of shared cache (#683)
https://gitlab.nic.cz/knot/knot-resolver/-/issues/683
2021-10-26 · Hamza Kılıç

I am benchmarking for a project, sending 10M queries to resolvers per test.
- Every test starts with a cold start.
- Opening 8 processes.
- Measuring %core, pps, elapsed milliseconds, and download Mbps.

I found an interesting result.
Opening 8 processes with a shared cache in the same folder (/var/cache/knot-resolver) vs. 8 processes with different cache folders gives approximately these values (shared vs. separate):

| Metric               | Shared cache | Separate caches |
|----------------------|--------------|-----------------|
| each core (all 8)    | 60 %         | 99 %            |
| pps                  | 20000        | 30000           |
| elapsed milliseconds | 500          | 300             |
| download Mbps        | 100          | 160             |
Conclusion: using a shared cache slows down performance dramatically.
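If cache sharing is not required, one way to get the second configuration is to key the cache directory by instance (a sketch; the `SYSTEMD_INSTANCE` variable is an assumption that holds for `kresd@N.service` systemd instances, and each directory must exist beforehand):

```lua
-- hypothetical per-instance cache: avoids contention on a single LMDB,
-- at the cost of each instance maintaining and warming its own cache
local id = os.getenv('SYSTEMD_INSTANCE') or '1'
cache.open(1 * GB, 'lmdb:///var/cache/knot-resolver/' .. id)
```

Note the trade-off: separate caches multiply memory and disk use and lower the aggregate hit rate, which may matter more than raw throughput in production.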
Is there a way to fix this problem?

---

## Build system: allow building with LeakSanitizer only (#675)
https://gitlab.nic.cz/knot/knot-resolver/-/issues/675
2021-05-28 · Štěpán Balážik

Currently, we can configure meson with `-Db_sanitize=address`, which produces unnecessary slowdowns when one is only interested in detecting leaks. The slowdown is even more pronounced when running a replay in `rr` (which I do a lot lately 😏).

---

## module dependencies (#672)
https://gitlab.nic.cz/knot/knot-resolver/-/issues/672
2022-02-18 · Tomas Krizek

Once we support declarative configuration, we need to figure out what modules to load, when, and in which order. Some considerations:
- unlike lua config, the declarative one has no order of execution (and loading modules)
- modules may depend on other modules, either as a hard or soft requirement
- modules may want to detect whether their requirements are met
- modules should be loaded automatically depending on the chosen configuration
- there should be no conflict between the desired configuration and the running configuration (i.e. when a module tries to auto-load another module which the user explicitly disabled)
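The considerations above amount to a dependency-resolution pass over the selected modules. A small illustrative sketch (the module names and the hard/soft dependency metadata are hypothetical, not kresd's real data):

```python
# Illustrative sketch: hard deps are auto-loaded and must precede the
# module; soft deps only order the load when the dependency is itself
# selected. Requires Python 3.9+ for graphlib.
from graphlib import TopologicalSorter

MODULES = {
    'stats':   {'hard': [], 'soft': []},
    'predict': {'hard': ['stats'], 'soft': []},
    'iterate': {'hard': [], 'soft': []},
    'hints':   {'hard': [], 'soft': ['iterate']},
}

def load_order(requested):
    """Return a load order: hard deps auto-loaded and ordered first."""
    selected, stack = set(), list(requested)
    while stack:                              # transitively add hard deps
        mod = stack.pop()
        if mod not in selected:
            selected.add(mod)
            stack.extend(MODULES[mod]['hard'])
    ts = TopologicalSorter()
    for mod in selected:
        deps = MODULES[mod]['hard'] + [s for s in MODULES[mod]['soft']
                                       if s in selected]
        ts.add(mod, *deps)                    # predecessors load first
    return list(ts.static_order())

order = load_order(['predict', 'hints'])
print(order)  # 'stats' is auto-loaded and precedes 'predict'
```

This also gives a natural place to detect the conflict case from the last bullet: if a hard dependency appears on the user's explicit deny-list, resolution can fail early instead of silently auto-loading the module.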