# Knot Resolver issues
https://gitlab.nic.cz/knot/knot-resolver/-/issues (updated 2023-07-04T12:35:27+02:00)

---

**Issue #769 · failure to start the manager**
https://gitlab.nic.cz/knot/knot-resolver/-/issues/769 · 2023-07-04T12:35:27+02:00 · Vaclav Sraier

Happens just about once in a while in our CI, nothing regular. Don't know how to reproduce. Rerunning the job always fixes the issue.
```
Oct 10 11:56:00 runner-114-project-147-concurrent-1-799966 env[5260]: 428ms:INFO:knot_resolver_manager.server:Loading initial configuration from /etc/knot-resolver/config.yml
Oct 10 11:56:00 runner-114-project-147-concurrent-1-799966 env[5260]: 437ms:INFO:knot_resolver_manager.server:Validating initial configuration...
Oct 10 11:56:00 runner-114-project-147-concurrent-1-799966 env[5260]: 439ms:WARNING:knot_resolver_manager.log:Changing logging level to 'INFO'
Oct 10 11:56:00 runner-114-project-147-concurrent-1-799966 env[5260]: 440ms:INFO:knot_resolver_manager.kresd_controller:Starting service manager auto-selection...
Oct 10 11:56:00 runner-114-project-147-concurrent-1-799966 env[5260]: 440ms:INFO:knot_resolver_manager.kresd_controller:Available subprocess controllers are ('supervisord',)
Oct 10 11:56:00 runner-114-project-147-concurrent-1-799966 env[5260]: 440ms:INFO:knot_resolver_manager.kresd_controller:Selected controller 'supervisord'
Oct 10 11:56:00 runner-114-project-147-concurrent-1-799966 env[5260]: 441ms:INFO:knot_resolver_manager.kresd_controller.supervisord:Supervisord is already running, we will just update its config...
Oct 10 11:56:05 runner-114-project-147-concurrent-1-799966 systemd[1]: knot-resolver.service: Main process exited, code=exited, status=1/FAILURE
Oct 10 11:56:05 runner-114-project-147-concurrent-1-799966 systemd[1]: knot-resolver.service: Failed with result 'exit-code'.
Oct 10 11:56:05 runner-114-project-147-concurrent-1-799966 systemd[1]: Failed to start Knot Resolver Manager.
```

---

**Issue #768 · Cipher order and other minor security issues**
https://gitlab.nic.cz/knot/knot-resolver/-/issues/768 · 2022-10-09T11:48:48+02:00 · Zdeněk Švarc

I suggest working towards compliance with the [testssl.sh](https://testssl.sh/) security tests, especially cipher order, although this is a common testing flaw.
DoT test results follow:
```
Testing protocols via sockets except NPN+ALPN
SSLv2 not offered (OK)
SSLv3 not offered (OK)
TLS 1 not offered
TLS 1.1 not offered
TLS 1.2 offered (OK)
TLS 1.3 offered (OK): final
NPN/SPDY not offered
ALPN/HTTP2 not offered
Testing cipher categories
NULL ciphers (no encryption) not offered (OK)
Anonymous NULL Ciphers (no authentication) not offered (OK)
Export ciphers (w/o ADH+NULL) not offered (OK)
LOW: 64 Bit + DES, RC[2,4] (w/o export) not offered (OK)
Triple DES Ciphers / IDEA not offered
Obsolete CBC ciphers (AES, ARIA etc.) offered
Strong encryption (AEAD ciphers) offered (OK)
Testing robust (perfect) forward secrecy, (P)FS -- omitting Null Authentication/Encryption, 3DES, RC4
PFS is offered (OK) TLS_AES_256_GCM_SHA384 TLS_CHACHA20_POLY1305_SHA256 ECDHE-ECDSA-AES256-GCM-SHA384 ECDHE-ECDSA-AES256-SHA ECDHE-ECDSA-CHACHA20-POLY1305
ECDHE-ECDSA-AES256-CCM TLS_AES_128_GCM_SHA256 TLS_AES_128_CCM_SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-SHA ECDHE-ECDSA-AES128-CCM
Elliptic curves offered: prime256v1 secp384r1 secp521r1 X25519 X448
Finite field group: ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192
Testing server preferences
Has server cipher order? no (NOT ok)
Negotiated protocol TLSv1.3
Negotiated cipher TLS_AES_256_GCM_SHA384, 253 bit ECDH (X25519) (limited sense as client will pick)
Negotiated cipher per proto (limited sense as client will pick)
ECDHE-ECDSA-AES256-GCM-SHA384: TLSv1.2
TLS_AES_256_GCM_SHA384: TLSv1.3
No further cipher order check has been done as order is determined by the client
Testing server defaults (Server Hello)
TLS extensions (standard) "EC point formats/#11" "session ticket/#35" "renegotiation info/#65281" "key share/#51" "supported versions/#43"
Session Ticket RFC 5077 hint 21600 seconds, session tickets keys seems to be rotated < daily
SSL Session ID support yes
Session Resumption Tickets: yes, ID: yes
TLS clock skew Random values, no fingerprinting possible
Signature Algorithm SHA256 with RSA
Server key size EC 384 bits
Server key usage Digital Signature
Server extended key usage TLS Web Server Authentication, TLS Web Client Authentication
Serial 047E...0D1D (OK: length 18)
Fingerprints SHA1 1A5...BAA
SHA256 53EB...DAB
Common Name (CN) --
subjectAltName (SAN) --
Issuer R3 (Let's Encrypt from US)
Trust (hostname) Ok via SAN (same w/o SNI)
Chain of trust Ok
EV cert (experimental) no
ETS/"eTLS", visibility info not present
Certificate Validity (UTC) 89 >= 30 days (2022-10-08 14:40 --> 2023-01-06 14:40)
# of certificates provided 3
Certificate Revocation List --
OCSP URI http://r3.o.lencr.org
OCSP stapling not offered
OCSP must staple extension --
DNS CAA RR (experimental) not offered
Certificate Transparency yes (certificate extension)
Testing vulnerabilities
Heartbleed (CVE-2014-0160) not vulnerable (OK), no heartbeat extension
CCS (CVE-2014-0224) not vulnerable (OK)
Ticketbleed (CVE-2016-9244), experiment. -- (applicable only for HTTPS)
ROBOT Server does not support any cipher suites that use RSA key transport
Secure Renegotiation (RFC 5746) supported (OK)
Secure Client-Initiated Renegotiation VULNERABLE (NOT ok), potential DoS threat
CRIME, TLS (CVE-2012-4929) not vulnerable (OK) (not using HTTP anyway)
POODLE, SSL (CVE-2014-3566) not vulnerable (OK), no SSLv3 support
TLS_FALLBACK_SCSV (RFC 7507) No fallback possible (OK), no protocol below TLS 1.2 offered
SWEET32 (CVE-2016-2183, CVE-2016-6329) not vulnerable (OK)
FREAK (CVE-2015-0204) not vulnerable (OK)
DROWN (CVE-2016-0800, CVE-2016-0703) not vulnerable on this host and port (OK)
no RSA certificate, thus certificate can't be used with SSLv2 elsewhere
LOGJAM (CVE-2015-4000), experimental not vulnerable (OK): no DH EXPORT ciphers, no DH key detected with <= TLS 1.2
BEAST (CVE-2011-3389) not vulnerable (OK), no SSL3 or TLS1
LUCKY13 (CVE-2013-0169), experimental potentially VULNERABLE, uses cipher block chaining (CBC) ciphers with TLS. Check patches
RC4 (CVE-2013-2566, CVE-2015-2808) no RC4 ciphers detected (OK)
Testing 370 ciphers via OpenSSL plus sockets against the server, ordered by encryption strength
Hexcode Cipher Suite Name (OpenSSL) KeyExch. Encryption Bits Cipher Suite Name (IANA/RFC)
-----------------------------------------------------------------------------------------------------------------------------
x1302 TLS_AES_256_GCM_SHA384 ECDH 253 AESGCM 256 TLS_AES_256_GCM_SHA384
x1303 TLS_CHACHA20_POLY1305_SHA256 ECDH 253 ChaCha20 256 TLS_CHACHA20_POLY1305_SHA256
xc02c ECDHE-ECDSA-AES256-GCM-SHA384 ECDH 253 AESGCM 256 TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
xc00a ECDHE-ECDSA-AES256-SHA ECDH 253 AES 256 TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
xcca9 ECDHE-ECDSA-CHACHA20-POLY1305 ECDH 253 ChaCha20 256 TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
xc0ad ECDHE-ECDSA-AES256-CCM ECDH 253 AESCCM 256 TLS_ECDHE_ECDSA_WITH_AES_256_CCM
x1301 TLS_AES_128_GCM_SHA256 ECDH 253 AESGCM 128 TLS_AES_128_GCM_SHA256
x1304 TLS_AES_128_CCM_SHA256 ECDH 253 AESCCM 128 TLS_AES_128_CCM_SHA256
xc02b ECDHE-ECDSA-AES128-GCM-SHA256 ECDH 253 AESGCM 128 TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
xc009 ECDHE-ECDSA-AES128-SHA ECDH 253 AES 128 TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
xc0ac ECDHE-ECDSA-AES128-CCM ECDH 253 AESCCM 128 TLS_ECDHE_ECDSA_WITH_AES_128_CCM
Could not determine the protocol, only simulating generic clients.
Running client simulations via sockets
Android 8.1 (native) TLSv1.2 ECDHE-ECDSA-AES128-GCM-SHA256, 253 bit ECDH (X25519)
Android 9.0 (native) No connection
Android 10.0 (native) No connection
Android 11 (native) No connection
Android 12 (native) No connection
Java 7u25 No connection
Java 8u161 TLSv1.2 ECDHE-ECDSA-AES256-SHA, 256 bit ECDH (P-256)
Java 11.0.2 (OpenJDK) TLSv1.3 TLS_AES_128_GCM_SHA256, 256 bit ECDH (P-256)
Java 17.0.3 (OpenJDK) TLSv1.3 TLS_AES_256_GCM_SHA384, 253 bit ECDH (X25519)
go 1.17.8 No connection
LibreSSL 2.8.3 (Apple) TLSv1.2 ECDHE-ECDSA-CHACHA20-POLY1305, 253 bit ECDH (X25519)
OpenSSL 1.0.2e TLSv1.2 ECDHE-ECDSA-AES256-GCM-SHA384, 256 bit ECDH (P-256)
OpenSSL 1.1.0l (Debian) TLSv1.2 ECDHE-ECDSA-AES256-GCM-SHA384, 253 bit ECDH (X25519)
OpenSSL 1.1.1d (Debian) TLSv1.3 TLS_AES_256_GCM_SHA384, 253 bit ECDH (X25519)
OpenSSL 3.0.3 (git) TLSv1.3 TLS_AES_256_GCM_SHA384, 253 bit ECDH (X25519)
```

---

**Issue #767 · kresd always returning SERVFAIL**
https://gitlab.nic.cz/knot/knot-resolver/-/issues/767 · 2022-09-26T16:35:45+02:00 · Sergio Callegari

Hi, I am recently experiencing a complete breakage of my instance of the knot resolver daemon after it has worked perfectly for a long time.
It is unclear to me if the issues are related to the latest update to the 5.5.3 release or to some other change in my networking environment.
I have the knot resolver daemon working on an ARM64 system with the armbian OS.
The knot resolver binary is from http://download.opensuse.org/repositories/home:/CZ-NIC:/knot-resolver-latest/Debian_11/
In recent days, kresd has been unable to start properly, so when I query `systemctl` for the status of `kresd@0`, I get
```
Sep 25 19:54:12 xxx kresd[1850]: [taupd ] active refresh failed for . with rcode: 2
Sep 25 19:54:12 xxx kresd[1850]: [timesk] cannot resolve '.' NS
```
If I enable debugging, I get flooded with messages. For the most part they look like repetitions of sequences similar to
```
Sep 25 20:09:11 xxx kresd[2256]: [select][65538.02] => id: '58484' choosing from addresses: 13 v4 + 13 v6; names to resolve: 0 v4 + 0 v6; force_resolve: 0; NO6: IPv6 is OK
Sep 25 20:09:11 xxx kresd[2256]: [select][65538.02] => id: '58484' choosing: 'K.ROOT-SERVERS.NET.'@'193.0.14.129#00053' with timeout 25 ms zone cut: '.'
Sep 25 20:09:11 xxx kresd[2256]: [select][65538.02] => id: '59806' noting selection error: 'D.ROOT-SERVERS.NET.'@'199.7.91.13#00053' zone cut: '.' error: 6 SERVFAIL
Sep 25 20:09:11 xxx kresd[2256]: [iterat][65538.02] <= rcode: SERVFAIL
```
...until I get to
```
Sep 25 20:09:11 xxx kresd[2256]: [resolv][65538.00] => too many failures in a row, bail out (mitigation for NXNSAttack CVE-2020-12667)
```
Changes that I have recently experienced in my setup include:
- Update to the knot resolver release 5.5.3. Not easy to test downgrading as I cannot find previous releases on http://download.opensuse.org/repositories/home:/CZ-NIC:/knot-resolver-latest/Debian_11/
- Update of the ARM machine to kernel 5.19.10-rockchip64
- Update of my ISP from Wind 3 (Italy) to Vodafone (Italy), both on fiber.
It looks like there are no major networking problems. The machine running kresd can ping outside and resolve via kdig using public nameservers such as Quad9 or Google. For sure the new Vodafone ISP is nasty: it does not let you set the DNS on its router, nor publish a different NS via the DHCP server on its router, nor select an SSID without the word "vodafone" in it. But it would seem strange to me if it ended up mangling traffic to the point of blocking a private caching nameserver from operating.
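One way to check whether the ISP interferes with plain DNS to the root servers is to hand-craft a minimal query and send it over UDP/53 directly. A sketch using only the Python standard library (the helper is hypothetical; the address is K-root's, taken from the logs above):

```python
import struct

def build_query(qname: str, qtype: int = 2, qid: int = 0x1234) -> bytes:
    """Build a minimal DNS query in wire format (qtype 2 = NS)."""
    # Header: id, flags (RD=0), 1 question, 0 answer/authority/additional records
    header = struct.pack("!HHHHHH", qid, 0x0000, 1, 0, 0, 0)
    labels = [l for l in qname.rstrip(".").split(".") if l]
    qname_wire = b"".join(bytes([len(l)]) + l.encode() for l in labels) + b"\x00"
    return header + qname_wire + struct.pack("!HH", qtype, 1)  # class IN

# Probe K-root directly (uncomment to actually send):
# import socket
# s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# s.settimeout(3)
# s.sendto(build_query("."), ("193.0.14.129", 53))
# print(s.recvfrom(4096))
```

If the hand-built query gets a response here but kresd still fails, the problem is more likely in resolution logic than in the raw network path.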
Any clue?

---

**Issue #766 · manager: datamodel: make sure JSON Schema is valid**
https://gitlab.nic.cz/knot/knot-resolver/-/issues/766 · 2022-10-09T13:35:39+02:00 · Aleš Mrázek

---

**Issue #764 · SERVFAIL for www.pinterest.com and TLS_FORWARD (kresd 5.5.2)**
https://gitlab.nic.cz/knot/knot-resolver/-/issues/764 · 2022-10-11T15:54:10+02:00 · Markus Donko-Huber

Hi knot-resolver maintenance team,
I spent some time debugging an issue resolving a specific FQDN: **`www.pinterest.com`**
After debugging, I found that the **SERVFAIL** error only occurs in the CNAME chain once I configure TLS_FORWARD, as in the example below.
Steps to reproduce the issue:
- use the latest knot-resolver version (5.5.2), e.g. from **docker cznic/knot-resolver**
- Forward all requests to Cloudflare Upstream: `policy.add(policy.all(policy.TLS_FORWARD({{'1.1.1.1', hostname='cloudflare-dns.com'}})))`
- Attempts to resolve `www.pinterest.com` result in a `SERVFAIL` error:
```
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 24362
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;www.pinterest.com. IN A
;; ANSWER SECTION:
www.pinterest.com. 439 IN CNAME www-pinterest-com.gslb.pinterest.com.
www-pinterest-com.gslb.pinterest.com. 159 IN CNAME www.gslb.pinterest.net.
;; Query time: 919 msec
;; SERVER: 192.168.10.240#53(192.168.10.240) (UDP)
;; WHEN: Mon Sep 12 13:35:07 CEST 2022
;; MSG SIZE rcvd: 119
```
It seems that the request fails because of DNSSEC and `pinterest.net` in the CNAME chain. Interestingly enough, once the **TLS_FORWARD** policy has been removed, **www.pinterest.com** resolves as expected.
I have too little knowledge to understand why the request fails in combination with **TLS_FORWARD**.
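For manual debugging of the DoT path, it helps to know that DNS over TLS carries ordinary DNS messages with a two-byte big-endian length prefix (RFC 7858, reusing the TCP framing of RFC 1035 §4.2.2). A minimal framing sketch (hypothetical helpers, not kresd code):

```python
import struct

def frame(msg: bytes) -> bytes:
    """Prefix a DNS message with its 2-byte big-endian length for TCP/TLS transport."""
    return struct.pack("!H", len(msg)) + msg

def unframe(stream: bytes) -> bytes:
    """Strip the length prefix, checking it matches the payload."""
    (length,) = struct.unpack("!H", stream[:2])
    payload = stream[2:2 + length]
    if len(payload) != length:
        raise ValueError("truncated DNS message")
    return payload
```

With this framing plus Python's `ssl` module, the same query can be sent to 1.1.1.1:853 by hand, to compare answers with and without kresd in the path.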
I am happy to contribute additional debug information.

---

**Issue #763 · I cannot start kresd with the data.mdb that I generate through lua**
https://gitlab.nic.cz/knot/knot-resolver/-/issues/763 · 2022-09-01T11:17:12+02:00 · make

I generate a data.mdb and set the cache maximum size to 1 GB, and it works well. But after I increase it to 50 GB and generate a new data.mdb, there are some problems starting it.
![image](/uploads/446655285edea1962b270cb9d7fa1fc3/image.png)
And before I run kresd,
![capture](/uploads/c2db07acd5f60eef81fc7f54ff0d1b22/捕获.PNG)
But after I run kresd,
![image](/uploads/1de5d306ac89983f96776d544f948884/image.png)

---

**Issue #761 · logging: consider adding startup and shutdown messages**
https://gitlab.nic.cz/knot/knot-resolver/-/issues/761 · 2022-10-10T11:45:32+02:00 · Matt Taggart

I thought I was having a problem with my kresd.log as it wasn't getting updated. Then I realized mostly only errors get logged there and nothing is printed on service start or stop. If I did a known bad query I could cause an update there.
Please consider log entries on start/stop. I note that kres-cache-gc already does this on startup:
```kres-cache-gc[18916]: Knot Resolver Cache Garbage Collector, version 5.5.2```
so maybe something similar for kresd, and shutdown messages for both.
Thanks

---

**Issue #760 · daf: rewrite not working in 5.5.1**
https://gitlab.nic.cz/knot/knot-resolver/-/issues/760 · 2022-09-21T17:07:23+02:00 · Michael Peleshenko

My daf config stopped working after upgrading from 5.4.3 to 5.5.1. It seems related to the recent changes to the renumber module.
**Config**
```
daf.add('src = 192.168.0.0/24 rewrite host.domain.local. A 192.168.0.1')
```
After adding the below debug config, I noticed an error related to the renumber module.
**Debug Config**
```
policy.add(policy.suffix(policy.DEBUG_ALWAYS, policy.todnames({'host.domain.local'})))
```
**Logs**
```
[system] error: /usr/lib/knot-resolver/kres_modules/renumber.lua:33: attempt to compare number with nil
```
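The failing comparison sits in the renumber module, which daf's rewrite action relies on. Conceptually, renumbering grafts the host bits of a matching address onto a destination prefix; a rough standard-library sketch of that mapping (illustrative Python, not the module's actual Lua code):

```python
import ipaddress

def renumber(addr: str, source: str, destination: str) -> str:
    """If addr falls inside the source subnet, graft its host bits
    onto the destination base address; otherwise leave it unchanged."""
    net = ipaddress.ip_network(source)
    ip = ipaddress.ip_address(addr)
    if ip not in net:
        return addr
    offset = int(ip) - int(net.network_address)
    return str(ipaddress.ip_address(int(ipaddress.ip_address(destination)) + offset))
```

For example, `renumber('10.10.10.7', '10.10.10.0/24', '192.168.1.0')` yields `192.168.1.7`, while addresses outside the source subnet pass through untouched.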
After reverting to the 5.4.3 version of renumber.lua, daf rewrite works again, so the recent renumber changes seem to have broken this in 5.5.1.

---

**Issue #759 · manager API: versioning**
https://gitlab.nic.cz/knot/knot-resolver/-/issues/759 · 2022-10-10T20:58:01+02:00 · Vaclav Sraier
I think it's quite unlikely that the manager's HTTP API will have a flawless design from the start. At some point in the future, there might be a need to make a new, backwards-incompatible API. I think we should prepare for that now, while it's still not too late and really easy to do.
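As a sketch of the version-in-URL-path idea, routing only needs to peel a `/v<N>` prefix off the request path; hypothetical code, not the manager's actual router:

```python
import re
from typing import Tuple

def split_api_version(path: str, default: int = 1) -> Tuple[int, str]:
    """Peel an optional '/v<N>' prefix off a request path.

    Returns (version, remaining_path); unversioned paths get the default,
    preserving backwards compatibility for existing clients."""
    match = re.fullmatch(r"/v(\d+)(/.*)?", path)
    if match:
        return int(match.group(1)), match.group(2) or "/"
    return default, path
```

Header-based negotiation (`X-API-Version`) would be the same check applied to a header value instead of the path.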
# Ideas
- version in URL path (`/config/v2/...`)
- version in the configuration (that would allow us to change the schema, but not the API itself)
- version in HTTP header (probably something like `X-API-Version: latest` or `X-API-Version: 6.0.0`)
- ...

Assignee: Vaclav Sraier

---

**Issue #758 · Timeout in config.ta_bootstrap test on Meson 0.57.0 onwards**
https://gitlab.nic.cz/knot/knot-resolver/-/issues/758 · 2022-07-13T11:06:13+02:00 · Héctor Molinero Fernández

It seems that from Meson 0.57.0 onwards the config.ta_bootstrap test fails with a timeout because the webserv.lua process does not terminate. If stdout and stderr are redirected to `/dev/null`, this problem does not occur.
I think this is a regression in Meson, but since I doubt there will be a backport for the version of Meson included in the current distros and the workaround is simple, it is worth fixing it here.
I encountered this problem when trying to build Knot Resolver on Ubuntu 22.04.

---

**Issue #754 · manager: datamodel: location for default values and constants**
https://gitlab.nic.cz/knot/knot-resolver/-/issues/754 · 2022-07-04T17:51:07+02:00 · Aleš Mrázek

We should agree on the location and definition of default values and constants. Some are currently defined in the configuration schema and some outside of it.
Issue follows the [comment](https://gitlab.nic.cz/knot/knot-resolver/-/merge_requests/1280#note_256358) in !1280.

---

**Issue #753 · manager: datamodel: max_workers configuration**
https://gitlab.nic.cz/knot/knot-resolver/-/issues/753 · 2022-08-05T16:40:29+02:00 · Aleš Mrázek

Issue follows the [comment](https://gitlab.nic.cz/knot/knot-resolver/-/merge_requests/1280#note_256315) in !1280.
Basically, we're not sure if it's a good idea to allow `max-workers` to be configured, capping the maximum number of workers. It could be confused with the `workers` option, which determines the required number of workers.
Currently, the maximum number of workers is also capped during [validation](https://gitlab.nic.cz/knot/knot-resolver/-/blob/manager/manager/knot_resolver_manager/datamodel/config_schema.py#L183) at 10 workers per CPU.

Assignee: Vaclav Sraier

---

**Issue #752 · Protocol layers**
https://gitlab.nic.cz/knot/knot-resolver/-/issues/752 · 2022-07-18T10:01:11+02:00 · Oto Šťáva

See snippet $1448

---

**Issue #751 · manager: declarative configuration examples**
https://gitlab.nic.cz/knot/knot-resolver/-/issues/751 · 2023-10-16T11:50:49+02:00 · Aleš Mrázek

# Configuration examples
A current detailed configuration datamodel can be seen [here](https://gitlab.nic.cz/knot/knot-resolver/-/tree/manager/manager/knot_resolver_manager/datamodel).
## Minimal config
The minimal configuration to start the manager:
```yaml
id: dev # identifier of the manager instance
```
## Complete config without policy rules
```yaml
id: dev # identifier of the manager instance
hostname: &name manager-dev
nsid: *name
rundir: etc/knot-resolver/runtime
workers: 1
management:
  interface: 127.0.0.1@5000 # or unix-socket: '/path/to/unix-socket'
webmgmt:
  interface: 127.0.0.1@5000
  tls: true
  cert-file: /path/to/file.cert
  key-file: /path/to/file.key
supervisor:
  backend: systemd-session
  watchdog:
    qname: nic.cz.
    qtype: AAAA
options:
  glue-checking: normal # strict, permissive
  qname-minimisation: true
  query-loopback: false
  reorder-rrset: true
  query-case-randomization: false
  priming: true
  rebinding-protection: false
  refuse-no-rd: true
  time-jump-detection: true
  violators-workarounds: false
  serve-stale: false
  prediction: # can be also set to 'false' or 'true'
    window: 15m
    period: 24
network:
  listen:
    - interface: 127.0.0.1@5353 # or unix-socket: /path/to/socket
      kind: dns # xdp, dot, doh-legacy, doh2
      freebind: false
  do-ipv4: true
  do-ipv6: true
  tcp-pipeline: 100
  edns-tcp-keepalive: true
  edns-buffer-size:
    upstream: 1232B
    downstream: 1232B
  address-renumbering:
    - source: 10.10.10.0/24
      destination: 192.168.1.0
  tls:
    cert-file: /path/to/file.cert
    key-file: /path/to/file.key
    sticket-secret: some-secret # or sticket-secret-file: /path/to/secret
    auto-discovery: false
    padding: true # or int value 0-512
  proxy-protocol:
    allow: [172.22.0.1, 172.18.1.0/24]
static-hints:
  ttl: 1d
  nodata: true
  etc-hosts: true
  root-hints:
    j.root-servers.net.: [2001:503:c27::2:30, 192.58.128.30]
  root-hints-file: /path/to/root.hints
  hints:
    foo.bar: [127.0.0.1]
  hints-files: [/path/to/custom.hints]
# policy rules examples will be separate
# views, slices, policy, rpz, stub-zones, forward-zones
cache:
  garbage-collector: true
  storage: /var/cache/knot-resolver
  size-max: 100M
  ttl-min: 5s
  ttl-max: 6d
  ns-timeout: 1000ms
  prefill:
    - origin: '.'
      url: https://www.internic.net/domain/root.zone
      refresh-interval: 1d
      ca-file: /etc/pki/tls/certs/ca-bundle.crt
dnssec: # can be set to 'false' or 'true'
  trust-anchor-sentinel: true
  trust-anchor-signal-query: true
  time-skew-detection: true
  keep-removed: 0
  refresh-time: 10s
  hold-down-time: 30d
  trust-anchors:
    - . 3600 IN DS 19036 8 2 49AAC11...
  negative-trust-anchors: [bad.boy, example.com]
  trust-anchors-files:
    - file: root.key
      read-only: false
dns64: # can be set to 'false' or 'true'
  prefix: 64:ff9b::/96
logging:
  level: notice # crit, err, warning, notice, info, debug
  target: syslog # stderr, stdout
  groups: [manager, cache]
  dnssec-bogus: false
dnstap: # can be set to 'false'
  unix-socket: /tmp/dnstap.sock
  log-queries: true
  log-responses: true
  log-tcp-rtt: true
debugging:
  assertion-abort: false
  assertion-fork: 5m
monitoring:
  enabled: lazy # manager-only, always
  graphite:
    prefix: *name
    host: 127.0.0.1 # or domain-name
    port: 2003
    interval: 5s
    tcp: false
lua:
  script-only: false # if 'true', no declarative config is used, just lua script
  script: | # or script-file: '/path/to/lua/script.lua'
    -- this is lua script
```
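Time values in this config (`5s`, `15m`, `1d`, `1000ms`) use a simple unit-suffix grammar. A hypothetical parser sketch of that grammar (the real datamodel's parsing may differ):

```python
import re

# Seconds per unit suffix
UNITS = {"ms": 0.001, "s": 1, "m": 60, "h": 3600, "d": 86400}

def parse_duration(text: str) -> float:
    """Convert a suffixed duration such as '15m' or '1000ms' to seconds."""
    match = re.fullmatch(r"(\d+)(ms|s|m|h|d)", text)
    if not match:
        raise ValueError(f"not a duration: {text!r}")
    value, unit = match.groups()
    return int(value) * UNITS[unit]
```

For example, `parse_duration("15m")` returns `900`, and `parse_duration("1000ms")` returns `1.0`.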
## Policy rules and config
These are only examples; there is no guarantee that they will work together in a single configuration.
```yaml
# Definition of views
# https://knot-resolver.readthedocs.io/en/stable/modules-view.html?highlight=views#views-and-acls
views:
  view-1:
    subnets: [127.0.0.1, '::']
    options: [no-minimize]
  view-2:
    tsig: [\5mykey]
slices:
  # Forwarding to multiple targets
  # https://knot-resolver.readthedocs.io/en/stable/modules-policy.html?highlight=slices#forwarding-to-multiple-targets
  - function: randomize-psl
    actions:
      - action: forward
        servers:
          - address: 192.0.2.1
            hostname: res.example.com
      - action: forward
        servers:
          - address: 193.17.47.1
            hostname: odvr.nic.cz
          - address: 185.43.135.1
            hostname: odvr.nic.cz
# RPZ blocklist
# https://knot-resolver.readthedocs.io/en/stable/modules-policy.html?highlight=rpz#policy.rpz
rpz:
  - action: deny
    file: /etc/knot-resolver/blocklist.rpz
    watch: true
    message: domain blocked by your resolver operator
# Policy rules examples
# https://knot-resolver.readthedocs.io/en/stable/modules-policy.html
policy:
  # Mirror query traffic
  - action: mirror
    servers: [127.0.0.2]
  # Whitelist 'good.example.com'
  - action: pass
    filter:
      pattern: good.example.com.
  # Deny query based on suffix filter for 'view-1' and 'view-2'
  - action: deny
    filter:
      suffix: example.net
    views: [view-1, view-2]
  # Change IPv4 address and TTL for example.com
  - action: answer
    filter:
      domain: example.com
    answer:
      rtype: A
      rdata: 192.0.2.7
      ttl: 300s
# Stub zones
# https://knot-resolver.readthedocs.io/en/stable/modules-policy.html?highlight=stub#policy.STUB
stub-zones:
  - name: 1.168.192.in-addr.arpa
    servers: [192.0.2.1@5353]
  # internal-only domain
  # https://knot-resolver.readthedocs.io/en/stable/quickstart-config.html?highlight=local%20domains#internal-only-domains
  - name: company.example
    servers: [192.0.2.44]
    options: [no-cache]
# Forwarding
# https://knot-resolver.readthedocs.io/en/stable/modules-policy.html?highlight=stub#forwarding
forward-zones:
  # Forward all queries to public resolvers https://www.nic.cz/odvr
  - name: '.'
    servers: [2001:148f:fffe::1, 2001:148f:ffff::1, 185.43.135.1, 193.17.47.1]
  # TLS forward, server authenticated using hostname and system-wide CA certificates
  # https://knot-resolver.readthedocs.io/en/stable/modules-policy.html?highlight=forward#tls-examples
  - name: '.'
    tls: true
    servers:
      - address: 192.0.2.1
        pin-sha256: Wg==
      - address: 2001:DB8::d0c
        hostname: res.example.com
        ca-file: /etc/knot-resolver/tlsca.crt
```

---

**Issue #747 · Expired gpg key in OBS**
https://gitlab.nic.cz/knot/knot-resolver/-/issues/747 · 2022-09-03T18:37:20+02:00 · Vladimír Čunát (vladimir.cunat@nic.cz)

.deb users of our [upstream repo](https://www.knot-resolver.cz/download/) can't update anymore (Debian, Ubuntu).
Message examples:
```
# apt update
[...]
W: GPG error: http://download.opensuse.org/repositories/home:/CZ-NIC:/knot-resolver-latest/Debian_11 InRelease: The following signatures were invalid: EXPKEYSIG 74062DB36A1F4009 home:CZ-NIC OBS Project <home:CZ-NIC@build.opensuse.org>
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
```
The key:
```
pub rsa2048 2018-02-15 [SC] [expired: 2022-06-21]
45737F9C8BC3F3ED2791818274062DB36A1F4009
uid [ expired] home:CZ-NIC OBS Project <home:CZ-NIC@build.opensuse.org>
```

---

**Issue #746 · daemon/http: returning status 400 to handshake with dnscrypt-proxy**
https://gitlab.nic.cz/knot/knot-resolver/-/issues/746 · 2022-06-23T09:39:55+02:00 · Oto Šťáva

When [`dnscrypt-proxy`](https://github.com/DNSCrypt/dnscrypt-proxy) attempts a handshake with `kresd`, status code 400 is returned.
On Gitter, user `jlongua` reported getting this log message:
```
Jun 16 13:41:55 draco.plan9-ns2.com dnscrypt-proxy[5775]: [2022-06-16 13:41:55] [ERROR] Webserver returned code 400
```
When I try it locally with a simple Docker image of dnscrypt-proxy, I get this:
```
dnscrypt-proxy-dnsdist-1 | [2022-06-17 06:58:33] [NOTICE] dnscrypt-proxy 2.1.1
dnscrypt-proxy-dnsdist-1 | [2022-06-17 06:58:33] [NOTICE] Network connectivity detected
dnscrypt-proxy-dnsdist-1 | [2022-06-17 06:58:33] [NOTICE] Now listening to 0.0.0.0:53 [UDP]
dnscrypt-proxy-dnsdist-1 | [2022-06-17 06:58:33] [NOTICE] Now listening to 0.0.0.0:53 [TCP]
dnscrypt-proxy-dnsdist-1 | [2022-06-17 06:58:33] [NOTICE] Source [relays] loaded
dnscrypt-proxy-dnsdist-1 | [2022-06-17 06:58:33] [NOTICE] Source [public-resolvers] loaded
dnscrypt-proxy-dnsdist-1 | [2022-06-17 06:58:33] [NOTICE] Firefox workaround initialized
dnscrypt-proxy-dnsdist-1 | [2022-06-17 06:58:33] [ERROR] 400 Bad Request
dnscrypt-proxy-dnsdist-1 | [2022-06-17 06:58:33] [NOTICE] dnscrypt-proxy is waiting for at least one server to be reachable
```

Assignee: Oto Šťáva

---

**Issue #745 · While communication with daemon: File exists**
https://gitlab.nic.cz/knot/knot-resolver/-/issues/745 · 2022-06-13T11:17:01+02:00 · Radek Krejča

Hello,
I have 3 identical virtualized Knot Resolver instances on Debian 11.3, all Knot Resolver version 5.5.0.
When I want to clear the cache with kresc, two devices work perfectly, but on one I get this:
kresc /run/knot-resolver/control/1
Warning! kresc is highly experimental, use at own risk.
Please tell authors what features you expect from client utility.
kresc> cache.clear()
While communication with daemon: File exists
And kresc dies. The cache is not cleared. There is enough RAM, about 30 GB free.
I tried restarting the server, but it didn't help.
Thank you very much.
Radek Krejča

---

**Issue #744 · tests/packaging: failing tests**
https://gitlab.nic.cz/knot/knot-resolver/-/issues/744 · 2022-06-01T14:06:50+02:00 · Oto Šťáva

I'm opening this issue so that we are tracking these test failures somewhere, but I'm not sure what we can do about them.
* `centos_7`
  * outdated `luarocks` (is `2.x`, required `3.x`) - cannot install `process`. I've tried to resolve this by explicitly installing an older `process` that does not require the new `luarocks`, but it attempts to install the new version anyway
* `centos_8`
  * appstream fails to prepare internal mirrorlist
  * no such command `config-manager`
* `fedora_31`
  * outdated `knot` - is `3.0.1`, required `3.0.2`
* `leap_15.2`
  * package conflicts
Related MR: !1304 (adds better logging for failing commands)

---

**Issue #742 · TLS: use GNUTLS_NO_TICKETS_TLS12**
https://gitlab.nic.cz/knot/knot-resolver/-/issues/742 · 2022-05-20T09:39:49+02:00 · Vladimír Čunát (vladimir.cunat@nic.cz)

It's a new [feature](https://gitlab.com/gnutls/gnutls/-/merge_requests/1475) that will be part of gnutls > 3.7.4. With TLS 1.2, session resumption weakens privacy guarantees too much ([explanation](https://gitlab.com/gnutls/gnutls/-/merge_requests/1475)), so it's better avoided, at least by default.

---

**Issue #741 · assertion "session_flags(session)->outgoing && !session_flags(session)->closing" failed**
https://gitlab.nic.cz/knot/knot-resolver/-/issues/741 · 2022-05-09T19:06:02+02:00 · megous

I use kresd on my home router, and last night it stopped processing queries (seemingly without crashing outright). The log was full of thousands of repetitions of:
```
....
Apr 30 06:39:36 router kresd[367]: [system] assertion "session_flags(session)->outgoing && !session_flags(session)->closing" failed in tcp_task_waiting_connection@../daemon/worker.c:1447
Apr 30 06:39:36 router kresd[367]: [system] assertion "session_flags(session)->outgoing && !session_flags(session)->closing" failed in tcp_task_waiting_connection@../daemon/worker.c:1447
Apr 30 06:39:36 router kresd[367]: [system] assertion "session_flags(session)->outgoing && !session_flags(session)->closing" failed in tcp_task_waiting_connection@../daemon/worker.c:1447
Apr 30 06:39:36 router kresd[367]: [system] assertion "session_flags(session)->outgoing && !session_flags(session)->closing" failed in tcp_task_waiting_connection@../daemon/worker.c:1447
Apr 30 06:39:37 router kresd[367]: [system] assertion "session_flags(session)->outgoing && !session_flags(session)->closing" failed in tcp_task_waiting_connection@../daemon/worker.c:1447
Apr 30 06:39:37 router kresd[367]: [system] assertion "session_flags(session)->outgoing && !session_flags(session)->closing" failed in tcp_task_waiting_connection@../daemon/worker.c:1447
....
```
During kresd restart, it crashed on shutdown:
```
Apr 30 06:39:40 router systemd-coredump[3711]: [LNK] Process 367 (kresd) of user 972 dumped core.
Module linux-vdso.so.1 with build-id c84a1af85cfb395c374cd5f645723e53f7f8d62b
Module p11-kit-trust.so with build-id 84da804340e6a810123f87b2b4a9c4bd4d0e8cf0
Module stats.so with build-id 460aaa6ef03adef5a85ed1f2bd000be5363ac1a8
Module hints.so with build-id 90c38cd4a6b5f6a17b1bf77dbbc81683631187ed
Module extended_error.so with build-id c9984a96311c272732feff7a5528d957c8d8b4b1
Module refuse_nord.so with build-id e66c72ba75b14470f13110045595ab1ad3fb533b
Module edns_keepalive.so with build-id 3f3662709a965611174c37e672b7bce66ac98658
Module libstdc++.so.6 with build-id 0efbe365b709015ea481a66fb0f5ad650e617599
Module libgpg-error.so.0 with build-id 1e65d609a859c3c4ba69fe248838202cf00c8bbb
Module libbrotlicommon.so.1 with build-id 3dc157d6417d3602b6d774ae07508e4bbfa8920c
Module libffi.so.8 with build-id 5103e7b5b7addb8026a35a62734fefd1c7ef5c64
Module libelf.so.1 with build-id 7047fb71440373a1456396c581692cda24627825
Module libgcrypt.so.20 with build-id b10fee43a15f81876aeadec4e734decfc4214e4e
Module libcap.so.2 with build-id ba39fbcf17238edd9188c42c664778b3da8d8975
Module liblz4.so.1 with build-id 6d85cb32490fa810dbc0b9cbb0043fc52e6ddba0
Module libzstd.so.1 with build-id df4d0e928163f0b5e1c7c5f78ddb055cbe22b639
Module liblzma.so.5 with build-id d34507011f065d2da4c4cc360615b2cd3ce3d4b2
Module libgmp.so.10 with build-id ede351880698ee91c5e8d457bf078a8887ecc97a
Module libhogweed.so.6 with build-id 2b084732112218e0af7d9b77153758b092cfa54f
Module libnettle.so.8 with build-id c376ee33b84aebefdf23b0dee1f22c8e79f1fd0e
Module libtasn1.so.6 with build-id e64114db392bb17238bd5cb22dfd12e308db52b0
Module libunistring.so.2 with build-id 457d1352b4d0b8d2eaad4b0c9ccea31446a11395
Module libidn2.so.0 with build-id be16fc6cb7814edc928c646a2f11ddfcc0ec1822
Module libbrotlidec.so.1 with build-id a634700f82bb52f4fa5e4a9495b39b890a1b26e6
Module libbrotlienc.so.1 with build-id b20212ed7f9630b545fb132a93579aef1967f308
Module libp11-kit.so.0 with build-id 5c3eefdf311483790b33a8f76dc45a87f6769ecf
Module libz.so.1 with build-id 961b20a79348f990621bd0a145f15c51219eef5d
Module libpthread.so.0 with build-id 2d7e5623023dc082483554f4447388c3a48a244b
Module libdl.so.2 with build-id 3d5771318379b07f0a5dda7613f76422aa7f6022
Module libbpf.so.0 with build-id 6313987843e278092e5f9375e0215c552337c896
Module libm.so.6 with build-id be9757a4dc0f0a727982d77fca226e6e852aa3bb
Module liblmdb.so with build-id ac3b357165ae5eb6c17cdc9de3adf3c5b9f5b3e6
Module libc.so.6 with build-id 2858f54ba7c8eae476c62b8631c4feded56e9064
Module libgcc_s.so.1 with build-id 43de5fed20f08220e018b86c70e0e46e00a46de2
Module libnghttp2.so.14 with build-id dd24ff864cabdc1181dd940f264451de6dd04ece
Module libcap-ng.so.0 with build-id d02eff3ece50ff505401a5ff91046d6cbf499dcb
Module libsystemd.so.0 with build-id 63e76b23478874cf91e5d81741285d93ccbf27cb
Module libgnutls.so.30 with build-id 2fc3c60ebe9b399e5ac84e4496bb75f35443f89e
Module libluajit-5.1.so.2 with build-id c7b4394fcbb3e55dd9dde4164c59020aa962ab33
Module libuv.so.1 with build-id 5786228ca54387aeb7ebb38960f8a75305ee5223
Module libdnssec.so.8 with build-id 4fc7ee9ab8130753ba22e7179cb74357678f8651
Module libzscanner.so.4 with build-id 2a53ca5ee610b0674aeabce12f38a2187703a5d1
Module libknot.so.12 with build-id bba838634737e8b916f4f2067f61f8f29a13c49c
Module libkres.so.9 with build-id 563367b5523d95acc1a70849a58e0a16cb923a3e
Module kresd with build-id 4c099ec64de5aeeb7ef45a5024654b7b042756f4
Stack trace of thread 367:
#0 0x0000ffff9185ac38 n/a (libkres.so.9 + 0x1ac38)
#1 0x0000ffff9185ac40 n/a (libkres.so.9 + 0x1ac40)
#2 0x0000ffff9185b1bc map_clear (libkres.so.9 + 0x1b1bc)
#3 0x0000aaaac39ab2dc n/a (kresd + 0x2b2dc)
#4 0x0000aaaac398aa14 n/a (kresd + 0xaa14)
#5 0x0000ffff9108b8fc __libc_start_call_main (libc.so.6 + 0x2b8fc)
#6 0x0000ffff9108b9d4 __libc_start_main@@GLIBC_2.34 (libc.so.6 + 0x2b9d4)
#7 0x0000aaaac398c4f0 _start (kresd + 0xc4f0)
ELF object binary architecture: AARCH64
```
(I don't have debug symbols, sorry)
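Even without debug symbols, systemd-coredump prints each frame as module + offset, and that offset is exactly what `addr2line` needs once a debug package is available. A small sketch of the arithmetic (the load base below is hypothetical, chosen so the numbers match frame #2 of the trace):

```shell
#!/bin/sh
# Frame #2 above: 0x0000ffff9185b1bc map_clear (libkres.so.9 + 0x1b1bc)
# The "+ 0x1b1bc" is the absolute address minus the module's load base
# (readable from /proc/<pid>/maps or from the core file itself).
abs=0xffff9185b1bc      # absolute address from the stack trace
base=0xffff91840000     # hypothetical load base of libkres.so.9
printf 'offset: 0x%x\n' $(( abs - base ))   # prints "offset: 0x1b1bc"
# With debug symbols installed, the offset resolves to a source line:
#   addr2line -Cfe /usr/lib/libkres.so.9 0x1b1bc
```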
```
# kresd -V status=1
Knot Resolver, version 5.5.0
[2022-03-29T19:29:11+0200] [ALPM] upgraded knot-resolver (5.4.4-1 -> 5.5.0-1)
```
This seems like an exceedingly rare event (one crash in a month since the update).
My configuration:
```
net.listen({'127.0.0.1', '192.168.1.1', '10.11.7.1', '10.11.4.1'})
net.listen({'[...redacted public ipv6 address...]'})
net.outgoing_v6('[...redacted public ipv6 address...]')
modules = {
'hints > iterate', -- Load /etc/hosts and allow custom root hints
'stats', -- Track internal statistics
'predict', -- Prefetch expiring/frequent records
}
cache.size = cache.fssize() - 10 * MB
log_level('warning')
--log_level('debug')
home_names = policy.todnames({'[... my public tld that I serve via a local secondary NS in case my internet crashes ...].'})
policy.add(policy.suffix(policy.FLAGS({'NO_CACHE', 'NO_EDNS'}), home_names))
policy.add(policy.suffix(policy.STUB('127.0.0.2'), home_names))
policy.add(policy.slice(
policy.slice_randomize_psl(),
policy.TLS_FORWARD({
{'1.1.1.1', hostname='1dot1dot1dot1.cloudflare-dns.com'},
{'1.0.0.1', hostname='1dot1dot1dot1.cloudflare-dns.com'},
{'2606:4700:4700::1111', hostname='1dot1dot1dot1.cloudflare-dns.com'},
{'2606:4700:4700::1001', hostname='1dot1dot1dot1.cloudflare-dns.com'},
}),
policy.TLS_FORWARD({
{'8.8.8.8', hostname='dns.google'},
{'8.8.4.4', hostname='dns.google'},
{'2001:4860:4860::8888', hostname='dns.google'},
{'2001:4860:4860::8844', hostname='dns.google'},
}),
policy.TLS_FORWARD({
{'9.9.9.9', hostname='dns9.quad9.net'},
{'149.112.112.112', hostname='dns9.quad9.net'},
{'2620:fe::fe', hostname='dns9.quad9.net'},
{'2620:fe::9', hostname='dns9.quad9.net'},
})
))
```
About a week before the crash I added TLS_FORWARD policies; before that I used recursive resolution in kresd. So it might be some issue with forwarding queries to those upstream resolvers.
Resolution was not failing completely. I noticed the crash because web pages using HSTS were returning TLS certificate errors, due to addresses being resolved to the IP address of my home router (which is also running something on the HTTPS port). That was quite weird.