Knot Resolver issues
https://gitlab.nic.cz/knot/knot-resolver/-/issues

Issue #465: DoH: error: attempt to call a userdata value stack traceback
Reported by Foundation for Applied Privacy; updated 2020-08-27
https://gitlab.nic.cz/knot/knot-resolver/-/issues/465

Thanks for adding DoH support in your latest release.
We started the process to migrate our current public DoH server to knot-resolver.
Our knot-resolver test setup works so far but we see the following entries in our logs when pointing (a single) firefox (in GET mode) to the endpoint (which is behind nginx):
```
[worker.background] error: attempt to call a userdata value stack traceback:
#011[C]: in function 'wait'
#011/usr/share/lua/5.1/cqueues/condition.lua:13: in function 'wait'
#011/usr/lib/knot-resolver/kres_modules/http_doh.lua:102: in function 'data'
#011/usr/lib/knot-resolver/kres_modules/http.lua:177: in function </usr/lib/knot-resolver/kres_modules/http.lua:160>
#011[C]: in function 'yieldable_pcall'
#011/usr/lib/knot-resolver/kres_modules/http.lua:232: in function </usr/lib/knot-resolver/kres_modules/http.lua:207>
#011[C]: in function 'yieldable_pcall'
#011/usr/share/lua/5.1/http/server.lua:159: in function </usr/share/lua/5.1/http/server.lua:158>
```
The frequency of these log events varies but we see multiple occurrences with a single browser.
firefox (network.trr.useGET = true) -> nginx -> knot-resolver
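For reference, the GET mode Firefox uses is just the RFC 8484 base64url encoding (padding stripped) of a DNS wire-format query in the `dns` URL parameter. A minimal sketch of constructing such a request URL; the endpoint name is made up, and this is an illustration, not part of kresd:

```python
import base64
import struct

def doh_get_url(endpoint: str, qname: str, qtype: int = 1) -> str:
    """Build an RFC 8484 DoH GET URL carrying a DNS query for qname/qtype."""
    # DNS header: id=0 (RFC 8484 recommends 0 for cacheability), RD=1, one question
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    question = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in qname.rstrip(".").split(".")
    ) + b"\x00" + struct.pack(">HH", qtype, 1)  # root label, QTYPE, QCLASS=IN
    # base64url without padding, as required by RFC 8484 for the GET form
    dns = base64.urlsafe_b64encode(header + question).rstrip(b"=")
    return f"{endpoint}?dns={dns.decode()}"

print(doh_get_url("https://doh.example/dns-query", "example.com"))
```

Replaying such requests against the nginx-fronted endpoint might help narrow down which request pattern triggers the traceback.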
knot-resolver 4.0.0

Issue #466: move docker image to registry.labs.nic.cz
Reported by Tomas Krizek; updated 2021-11-25
https://gitlab.nic.cz/knot/knot-resolver/-/issues/466

Docker image for knot-resolver should be moved to our own upstream registry. The effect for end users would be to switch the image name from `cznic/knot-resolver` to something like `registry.labs.nic.cz/knot/knot-resolver`.
The issues with the current setup on Docker Hub:
- after their recent "update", automated builds require **administrative** access to the source code repository
> This service account should have access to any repositories to be built, and must have administrative access to the source code repositories so it can manage deploy keys. (source: https://docs.docker.com/docker-hub/builds/#service-users-for-team-autobuilds )
I have no idea what "managing deploy keys" means, or why administrative access would even be required in the first place to build from a publicly pushed branch / tag.
- providing docker hub with unneeded privileges goes against good security practices and ends up as one would expect (https://news.ycombinator.com/item?id=19763413)
---
Since we already have our own registry and CI/CD infrastructure, I think we should take advantage of it and use it for docker image builds for both latest master branch and tagged versions.
This would fix the currently broken automation of image builds and also simplify the entire process (using docker hub requires github, so we need to mirror there first, then build an image from there...)
@dsalzman Do you think this would make sense for the Knot DNS image as well?

Issue #470: SERVFAIL when serving from cache, don't know how to debug
Reported by ValdikSS; updated 2019-12-18
https://gitlab.nic.cz/knot/knot-resolver/-/issues/470

I'm running knot-resolver 3.2.1-3~bpo9+1 from Debian stretch-backports.
From time to time, resolving random domain names return SERVFAIL, which is being put into knot-resolver's cache.
Running `dig +trace` to such domains usually returns lookup errors even earlier in the chain.
If I clear cache with `cache.clear()`, DNS works again as expected.
I don't know how to debug this issue and what could be the cause. How can I provide more logs to fix this issue?
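As background for why a bad answer persists: resolvers commonly cache negative results (including failures) for a limited TTL, so a cached SERVFAIL keeps being served until it expires or the cache is flushed. A toy illustration of that behavior, not kresd's actual implementation:

```python
import time

class NegCache:
    """Toy model: a resolver serves a cached failure until it expires or is cleared."""
    def __init__(self):
        self._entries = {}  # name -> (rcode, expiry timestamp)

    def put(self, name, rcode, ttl, now=None):
        now = time.time() if now is None else now
        self._entries[name] = (rcode, now + ttl)

    def get(self, name, now=None):
        now = time.time() if now is None else now
        entry = self._entries.get(name)
        if entry and entry[1] > now:
            return entry[0]  # still valid: answered from cache, even if it's SERVFAIL
        return None          # expired or absent: would be re-resolved upstream

    def clear(self):
        count = len(self._entries)
        self._entries.clear()
        return count         # analogous to cache.clear() reporting [count]

cache = NegCache()
cache.put("jprosto.ru", "SERVFAIL", ttl=900, now=0)
print(cache.get("jprosto.ru", now=100))  # SERVFAIL, served from cache
cache.clear()
print(cache.get("jprosto.ru", now=100))  # None: resolved upstream again
```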
My configuration:
```
user('knot-resolver','knot-resolver')
cache.size = 300 * MB
modules = { 'workarounds < iterate', 'stats', 'bogus_log' }
dofile("/etc/knot-resolver/knot-aliases-alt.conf")
policy.add(
    policy.suffix(
        policy.STUB({'127.0.0.4'}),
        policy.todnames(blocked_hosts)
    )
)
```
Where `/etc/knot-resolver/knot-aliases-alt.conf` is a file with a single `blocked_hosts={}` table containing lots of hosts. It shouldn't affect DNS lookups or this issue.
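For illustration, such an aliases file can be generated from a plain host list. This is a hypothetical generator (the real file's format is not shown in the report, only that it defines one `blocked_hosts` table):

```python
def make_blocked_hosts_lua(hosts):
    """Render a Lua file defining a single blocked_hosts table, one quoted name per line."""
    lines = ["blocked_hosts = {"]
    lines += [f"  '{h}'," for h in hosts]
    lines.append("}")
    return "\n".join(lines) + "\n"

print(make_blocked_hosts_lua(["ads.example.com", "tracker.example.net"]))
```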
Before clearing the cache:
```
# dig jprosto.ru
; <<>> DiG 9.10.3-P4-Debian <<>> jprosto.ru
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 8684
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;jprosto.ru. IN A
;; Query time: 0 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Sun Apr 28 14:40:51 CEST 2019
;; MSG SIZE rcvd: 39
# dig +trace jprosto.ru
; <<>> DiG 9.10.3-P4-Debian <<>> +trace jprosto.ru
;; global options: +cmd
. 120086 IN NS a.root-servers.net.
. 120086 IN NS b.root-servers.net.
. 120086 IN NS c.root-servers.net.
. 120086 IN NS d.root-servers.net.
. 120086 IN NS e.root-servers.net.
. 120086 IN NS f.root-servers.net.
. 120086 IN NS g.root-servers.net.
. 120086 IN NS h.root-servers.net.
. 120086 IN NS i.root-servers.net.
. 120086 IN NS j.root-servers.net.
. 120086 IN NS k.root-servers.net.
. 120086 IN NS l.root-servers.net.
. 120086 IN NS m.root-servers.net.
. 120086 IN RRSIG NS 8 0 518400 20190506170000 20190423160000 25266 . tRFeXF0ccHkCHTB11jEKDzXtoQtiSrCDX3GRzqyLvl2D5+ML6yqEkYTc e9Bs2sKYmXFk2pdldVbub3n0IQTXAW5MSuWDWqv/WtCA5v6FCCJTXCm+ mGDSKEbTdfLJDfzxYunWUKo1sYCs2d8im5LFs0RJMY/1EIngrJK1ujkj JrSXZjdmlaUv1cTBIXuV/Xn3CansYP3wOwIY3W4fOVYgfLAE1MEvnAUR 0xxjFj1eXNuv3wYE5mYGtumYL1fPHiU/XAIACZj3FWdWiG2loDz/u+ty zGPB6t+Ms7DKbaFp7EiWskWL60zWzxHcd3vxOUL0o0Ic+8csLqL6tO1h zJA3nA==
;; Received 717 bytes from 127.0.0.1#53(127.0.0.1) in 0 ms
ru. 172800 IN NS a.dns.ripn.net.
ru. 172800 IN NS b.dns.ripn.net.
ru. 172800 IN NS d.dns.ripn.net.
ru. 172800 IN NS e.dns.ripn.net.
ru. 172800 IN NS f.dns.ripn.net.
ru. 86400 IN DS 15506 8 2 331CBB1932E7CF201F81AB299EF8711AD7175E8812508679E475930C 2B145C97
ru. 86400 IN RRSIG DS 8 1 86400 20190511050000 20190428040000 25266 . nmGftS2ztiLhDImmEPgPAOnoBKrwOpARMkP03EJ4kyIGgGOESH5ePJDX bKiU74vp68hBetKPC8toxtBCD4Q6s7cYxelSKpuuchAvbT1V+6KQMdMp mhuLc9ix1A0PsmWr78ZrjngKSqmgg4lFW1Kgy1wxnHXicdGeyK4Gk0Tm Fb1AivBjgjnMY/KaV2ylocCKePIW+fT666ReFf2RteIdSTPHwqFfBj3s QuoZS+lSlMPrwM+Npj60hv/BE+B8tTzJxCQuTZf4talUND10ySUuEJqa GuSngvz8UY9HznZTHSyUn21orZggJcdTLFS3CpYsxU6tee4NjHlBG3sT hkvz1Q==
couldn't get address for 'a.dns.ripn.net': failure
couldn't get address for 'b.dns.ripn.net': failure
couldn't get address for 'd.dns.ripn.net': failure
couldn't get address for 'e.dns.ripn.net': failure
couldn't get address for 'f.dns.ripn.net': failure
dig: couldn't get address for 'a.dns.ripn.net': no more
```
After clearing the cache:
```
> cache.clear()
[count] => 675199
# dig jprosto.ru
; <<>> DiG 9.10.3-P4-Debian <<>> jprosto.ru
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 29704
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;jprosto.ru. IN A
;; ANSWER SECTION:
jprosto.ru. 300 IN A 5.101.152.156
;; Query time: 750 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Sun Apr 28 14:44:18 CEST 2019
;; MSG SIZE rcvd: 55
```

Issue #472: running without any trust anchors leads to insufficient caching
Reported by Vladimír Čunát (vladimir.cunat@nic.cz); updated 2020-04-27
https://gitlab.nic.cz/knot/knot-resolver/-/issues/472

RR ranks are confused: _some_ records get cached with `KR_RANK_TRY`, but that's insufficient for use in answers – it would need at least `KR_RANK_INSECURE`, which would also be the correct rank in this case IMO.
_It's a low-priority configuration for us._

Issue #474: prefill crashes on empty zone file
Reported by Petr Špaček; updated 2019-07-09
https://gitlab.nic.cz/knot/knot-resolver/-/issues/474

https://lists.nic.cz/pipermail/knot-resolver-users/2019/000147.html
This occurs if for some reason the prefill file happens to be empty:
```
kresd[11812]: [prefill] root zone file valid for 17 hours 01 minutes,
reusing data from disk
kresd[11812]: segfault at 0 ip 00007f9b06017436 sp 00007ffc3142bb58
error 4 in libc-2.28.so[7f9b05fa1000+148000]
Apr 30 20:26:13 scruffy kernel: Code: 0f 1f 40 00 66 0f ef c0 66 0f ef
c9 66 0f ef d2 66 0f ef db 48 89 f8 48 89 f9 48 81 e1 ff 0f 00 00 48 81
f9 cf 0f 00 00 77 6a <f3> 0f 6f 20 66 0f 74 e0 66 0f d7 d4 85 d2 74 04
0f bc c2 c3 48 83
```
This happens in a loop until systemd gives up trying to start kresd.
Solved by removing /var/cache/knot-resolver/root.zone (0 bytes).
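The fix presumably needs a guard for the empty-file case before the cached zone data is parsed. A sketch of that idea in Python (illustration only, not the actual kresd/prefill code):

```python
import os

def load_zone_file(path):
    """Refuse to reuse an empty or missing cached zone file instead of crashing on it."""
    try:
        if os.path.getsize(path) == 0:
            raise ValueError(f"cached zone file {path} is empty, refusing to reuse it")
    except OSError:
        raise ValueError(f"cached zone file {path} is missing or unreadable")
    with open(path, "rb") as f:
        return f.read()  # hand the non-empty contents to the zone parser
```

On this error the module could fall back to re-downloading the zone, rather than looping until systemd gives up.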
kresd version: 4.0.0
We use your example config from:
https://knot-resolver.readthedocs.io/en/stable/modules.html#cache-prefilling
Assignee: Ivana Krumlova

Issue #476: ulimit -n
Reported by Vladimír Čunát (vladimir.cunat@nic.cz); updated 2020-01-07
https://gitlab.nic.cz/knot/knot-resolver/-/issues/476

##### Problem
Very often the number of file-descriptors is limited quite low by default. Consequently, kresd's _uncached_ QPS may be unnecessarily limited by that (lots of SERVFAILs), at least by default.
##### Details
The limits I often see on Linux: 1024 soft + 4096 hard, which seems ridiculously low for the resources of today's machines. We open a new FD for every UDP packet sent upstream in order to maximize entropy from port randomization.
I expect the problem is partially mitigated by the fact that these limits apply per-process, but even so – it seems easy to improve the defaults at least a bit.
##### What we can do:
- [x] `LimitNOFILE=foo` in `kresd@.service`
- [x] document it somewhere
- [x] (maybe) use `ulimit()` or similar to let kresd increase it – just moving from 1024 to 4096 seems quite a substantial improvement, and 4096 even seems OK-ish for some cases I tested
- [ ] (possibly, in future) in case of plaintext forwarding, automatically prefer TCP when QPS gets high and/or getting problems like `EMFILE` errors. Users behind some NATs are also severely limited in terms of "concurrent connection count".
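The third checklist item can be done from inside the process: a process may raise its own soft limit up to the hard limit without privileges. The mechanism, illustrated with Python's `resource` module (kresd itself would use `setrlimit(2)` directly):

```python
import resource

def raise_nofile_limit():
    """Raise the soft RLIMIT_NOFILE to the hard limit, which needs no privileges."""
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    if soft < hard:
        # e.g. moving from the common 1024 soft limit up to the 4096 hard limit
        resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
    return resource.getrlimit(resource.RLIMIT_NOFILE)

print(raise_nofile_limit())
```

`LimitNOFILE=` in the systemd unit is still the cleaner place to raise the hard limit itself.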
Thoughts?
Milestone: 5.0.0; assignee: Tomas Krizek

Issue #477: knot-resolver 4 won't build on macOS because of ld: -pagezero_size option
Reported by Jayson Reis; updated 2019-05-16
https://gitlab.nic.cz/knot/knot-resolver/-/issues/477

Hi there, I am trying to compile the new version on macOS, but it seems to be failing because of [this issue](https://github.com/Homebrew/homebrew-core/issues/37169); even though the workaround is in meson.build, it seems to be having the same effect.
Steps to reproduce:
```
git clone https://github.com/CZ-NIC/knot-resolver.git
...
The Meson build system
Version: 0.50.1
Source dir: /Users/jayson/src/knot-resolver
Build dir: /Users/jayson/src/knot-resolver/build
Build type: native build
Project name: knot-resolver
Project version: 4.0.0
Native C compiler: cc (clang 10.0.1 "Apple LLVM version 10.0.1 (clang-1001.0.46.4)")
Native C++ compiler: c++ (clang 10.0.1 "Apple LLVM version 10.0.1 (clang-1001.0.46.4)")
Build machine cpu family: x86_64
Build machine cpu: x86_64
Message: --- required dependencies ---
Found pkg-config: /usr/local/bin/pkg-config (0.29.2)
Dependency libknot found: YES 2.8.1
Dependency libdnssec found: YES 2.8.1
Dependency libzscanner found: YES 2.8.1
Dependency libuv found: YES 1.28.0
Found CMake: /usr/local/bin/cmake (3.14.3)
Dependency lmdb found: NO (tried pkgconfig, cmake and framework)
Library lmdb found: YES
Dependency gnutls found: YES 3.6.7
Dependency luajit found: YES 2.0.5
Message: ------------------------------
Message: --- systemd socket activation ---
Dependency libsystemd found: NO (tried pkgconfig, cmake and framework)
Message: ---------------------------
Configuring kresconfig.h using configuration
Message: --- client dependencies ---
Dependency libedit found: NO (tried pkgconfig, cmake and framework)
Library edit found: YES
Message: ---------------------------
Configuring trust_anchors.lua using configuration
Configuring sandbox.lua using configuration
Program ./kres-gen.sh found: YES (/Users/jayson/src/knot-resolver/daemon/lua/./kres-gen.sh)
Message: --- dnstap module dependencies ---
Dependency libprotobuf-c found: YES 1.3.1
Dependency libfstrm found: YES 0.5.0
Program protoc-c found: YES (/usr/local/bin/protoc-c)
Message: ----------------------------------
Configuring http.lua using configuration
Message: --- unit_tests dependencies ---
Dependency cmocka found: YES 1.1.5
Message: -------------------------------
Configuring kresd.8 using configuration
Program ../scripts/make-doc.sh found: YES (/Users/jayson/src/knot-resolver/doc/../scripts/make-doc.sh)
Configuring config.cluster using configuration
Configuring config.docker using configuration
Configuring config.isp using configuration
Configuring config.personal using configuration
Configuring config.splitview using configuration
Configuring kresd.conf using configuration
Message: --- lint dependencies ---
Program clang-tidy found: NO
Program luacheck found: NO
Program flake8 found: NO
Program scripts/run-pylint.sh found: YES (/Users/jayson/src/knot-resolver/scripts/run-pylint.sh)
Message: -------------------------
Message:
======================= SUMMARY =======================
paths
prefix: /usr/local
lib_dir: /usr/local/lib/knot-resolver
sbin_dir: /usr/local/sbin
etc_dir: /usr/local/etc/knot-resolver
root.hints: /usr/local/etc/knot-resolver/root.hints
trust_anchors
keyfile_default: /usr/local/etc/knot-resolver/root.keys
managed_ta: enabled
systemd:
socket activation: disabled
files: disabled
work_dir:
optional components
client: enabled
dnstap: enabled
unit_tests: enabled
config_tests: disabled
extra_tests: disabled
additional
user: knot-resolver
group: knot-resolver
install_kresd_conf: enabled
=======================================================
Build targets in project: 27
Found ninja-1.9.0 at /usr/local/bin/ninja
ninja -C build
ninja: Entering directory `build'
[46/100] Compiling C object 'daemon/f77b12a@@kresd@exe/bindings_net.c.o'.
../daemon/bindings/net.c:918:17: warning: unused variable 'engine' [-Wunused-variable]
struct engine *engine = engine_luaget(L);
^
../daemon/bindings/net.c:951:17: warning: unused variable 'engine' [-Wunused-variable]
struct engine *engine = engine_luaget(L);
^
2 warnings generated.
[52/100] Linking target lib/libkres.9.dylib.
FAILED: lib/libkres.9.dylib
cc -o lib/libkres.9.dylib 'lib/76b5a35@@kres@sha/cache_api.c.o' 'lib/76b5a35@@kres@sha/cache_cdb_lmdb.c.o' 'lib/76b5a35@@kres@sha/cache_entry_list.c.o' 'lib/76b5a35@@kres@sha/cache_entry_pkt.c.o' 'lib/76b5a35@@kres@sha/cache_entry_rr.c.o' 'lib/76b5a35@@kres@sha/cache_knot_pkt.c.o' 'lib/76b5a35@@kres@sha/cache_nsec1.c.o' 'lib/76b5a35@@kres@sha/cache_nsec3.c.o' 'lib/76b5a35@@kres@sha/cache_peek.c.o' 'lib/76b5a35@@kres@sha/dnssec.c.o' 'lib/76b5a35@@kres@sha/dnssec_nsec.c.o' 'lib/76b5a35@@kres@sha/dnssec_nsec3.c.o' 'lib/76b5a35@@kres@sha/dnssec_signature.c.o' 'lib/76b5a35@@kres@sha/dnssec_ta.c.o' 'lib/76b5a35@@kres@sha/generic_lru.c.o' 'lib/76b5a35@@kres@sha/generic_map.c.o' 'lib/76b5a35@@kres@sha/generic_queue.c.o' 'lib/76b5a35@@kres@sha/generic_trie.c.o' 'lib/76b5a35@@kres@sha/layer_cache.c.o' 'lib/76b5a35@@kres@sha/layer_iterate.c.o' 'lib/76b5a35@@kres@sha/layer_validate.c.o' 'lib/76b5a35@@kres@sha/module.c.o' 'lib/76b5a35@@kres@sha/nsrep.c.o' 'lib/76b5a35@@kres@sha/resolve.c.o' 'lib/76b5a35@@kres@sha/rplan.c.o' 'lib/76b5a35@@kres@sha/utils.c.o' 'lib/76b5a35@@kres@sha/zonecut.c.o' -Wl,-dead_strip_dylibs -Wl,-headerpad_max_install_names -shared -install_name @rpath/libkres.9.dylib -compatibility_version 9 -current_version 9 contrib/libcontrib.a /usr/local/Cellar/libuv/1.28.0/lib/libuv.dylib -lpthread -ldl -llmdb /usr/local/Cellar/knot/2.8.1/lib/libknot.dylib /usr/local/Cellar/knot/2.8.1/lib/libdnssec.dylib /usr/local/Cellar/gnutls/3.6.7.1/lib/libgnutls.dylib -pagezero_size 10000 -image_base 100000000 /usr/local/Cellar/luajit/2.0.5/lib/libluajit-5.1.dylib -Wl,-headerpad_max_install_names -Wl,-rpath,@loader_path/../contrib -Wl,-rpath,/usr/local/Cellar/knot/2.8.1/lib -Wl,-rpath,/usr/local/Cellar/gnutls/3.6.7.1/lib
ld: -pagezero_size option can only be used when linking a main executable
clang: error: linker command failed with exit code 1 (use -v to see invocation)
[65/100] Compiling C++ object 'modules/policy/6a19ea2@@ahocorasick@sha/lua-aho-corasick_ac_fast.cxx.o'.
ninja: build stopped: subcommand failed.
```
Thank you in advance.

Issue #478: Handling of PTR records in DNS64 module
Reported by Ondřej Caletka; updated 2021-08-25
https://gitlab.nic.cz/knot/knot-resolver/-/issues/478

I know this omission is [documented](https://knot-resolver.readthedocs.io/en/stable/modules.html?highlight=PTR+synthesis#dns64), but [RFC 6147](https://tools.ietf.org/html/rfc6147#section-5.3.1) still requires proper handling of PTR records for the DNS64 translation prefix.
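For the well-known prefix `64:ff9b::/96` (RFC 6052), handling such a PTR query essentially means extracting the IPv4 address embedded in the low 32 bits and looking up its in-addr.arpa name instead. A sketch of that mapping (Python stdlib, illustration only, not kresd's DNS64 module):

```python
import ipaddress
from typing import Optional

WKP = ipaddress.ip_network("64:ff9b::/96")  # well-known DNS64 prefix (RFC 6052)

def dns64_ptr_target(addr: str) -> Optional[str]:
    """For an address inside the DNS64 prefix, return the in-addr.arpa name
    whose PTR record should be looked up instead; None if not applicable."""
    ip = ipaddress.ip_address(addr)
    if ip.version != 6 or ip not in WKP:
        return None
    v4 = ipaddress.ip_address(ip.packed[-4:])  # last 32 bits embed the IPv4 address
    return v4.reverse_pointer                   # e.g. '1.2.0.192.in-addr.arpa'

print(dns64_ptr_target("64:ff9b::192.0.2.1"))
```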
Since DNS64-enabled instances of Knot Resolver are being deployed by both [Cloudflare](https://developers.cloudflare.com/1.1.1.1/support-nat64/) and [RIPE NCC](https://ripe78.ripe.net/on-site/tech-info/ipv6-only-network/), it would help a lot, especially during tracerouting, to have the PTR handling implemented properly.
Assignee: Vladimír Čunát (vladimir.cunat@nic.cz)

Issue #480: "attempt to call a string value" in lua-http
Reported by Héctor Molinero Fernández; updated 2019-11-01
https://gitlab.nic.cz/knot/knot-resolver/-/issues/480

Hi,
I'm having the following exception with lua-http:
```
[worker.background] error: /usr/local/share/lua/5.1/http/hpack.lua:0: attempt to call a string value stack traceback:
/usr/local/share/lua/5.1/http/hpack.lua: in function 'decode_header_helper'
/usr/local/share/lua/5.1/http/hpack.lua:836: in function 'decode_headers'
/usr/local/share/lua/5.1/http/h2_stream.lua:467: in function 'handler'
/usr/local/share/lua/5.1/http/h2_connection.lua:219: in function 'handle_frame'
/usr/local/share/lua/5.1/http/h2_connection.lua:260: in function 'step'
/usr/local/share/lua/5.1/http/h2_connection.lua:342: in function 'get_next_incoming_stream'
/usr/local/share/lua/5.1/http/server.lua:155: in function </usr/local/share/lua/5.1/http/server.lua:132>
```
I have tried building Knot Resolver `4.0.0` with lua-http `0.3` and `scm-0`, and the same problem occurs with both. The curious thing is that it only happens on an `arm64v8` server; it works properly on `x86_64`.
The problem can be reproduced with this Docker image: https://github.com/hectorm/hblock-resolver

Issue #482: Not-yet-valid signature causes SERVFAIL + data in answer section
Reported by Petr Špaček; updated 2019-06-20
https://gitlab.nic.cz/knot/knot-resolver/-/issues/482

Reproducer:
```
# dig @::1 signotincepted.ok.ok.bad-dnssec.wb.sidnlabs.nl +rrcomments
; <<>> DiG 9.11.3-1ubuntu1.7-Ubuntu <<>> @::1 signotincepted.ok.ok.bad-dnssec.wb.sidnlabs.nl +rrcomments
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 5493
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;signotincepted.ok.ok.bad-dnssec.wb.sidnlabs.nl. IN A
;; ANSWER SECTION:
signotincepted.ok.ok.bad-dnssec.wb.sidnlabs.nl. 3600 IN A 94.198.159.39
;; Query time: 952 msec
;; SERVER: ::1#53(::1)
;; WHEN: Thu Jun 20 14:09:32 UTC 2019
;; MSG SIZE rcvd: 91
```
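The time check that fails for a not-yet-incepted signature is the RFC 4034 validity-period rule: a signature may only be used between its inception and expiration times, inclusive. A sketch of that rule, using timestamps in the style of the RRSIG wire fields (Python, illustration only):

```python
from datetime import datetime, timezone

def rrsig_time_valid(inception: int, expiration: int, now: int) -> bool:
    """RFC 4034: a signature is usable only when inception <= now <= expiration."""
    return inception <= now <= expiration

def ts(s: str) -> int:
    """Parse an RRSIG-style YYYYMMDDHHMMSS timestamp (UTC) into epoch seconds."""
    return int(datetime.strptime(s, "%Y%m%d%H%M%S")
               .replace(tzinfo=timezone.utc).timestamp())

# A signature whose inception lies after the query time must fail validation:
print(rrsig_time_valid(ts("20190625000000"), ts("20190725000000"),
                       ts("20190620140932")))  # -> False
```

When this check fails, the validator should treat the RRset as bogus and answer SERVFAIL with an empty answer section (unless the client set +CD).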
We should return no data on DNSSEC validation errors (except when the +CD bit is set).

Issue #484: Can't keep systemd sockets open on Ubuntu 18.04.2
Reported by Michael Adams; updated 2019-12-18
https://gitlab.nic.cz/knot/knot-resolver/-/issues/484

Came across this systemd "bug" tonight while trying to set up Knot Resolver 2.1.1 on Ubuntu 18.04.2. The GitHub page in question explicitly calls out a documentation conflict between **systemd** and **kresd** for socket clearing behavior.
[using Sockets= in a drop-in for instantiated service doesn't clear the list of previously defined sockets](https://github.com/systemd/systemd/issues/12415)

Issue #485: drop systemd socket activation support
Reported by Tomas Krizek; updated 2020-01-27
https://gitlab.nic.cz/knot/knot-resolver/-/issues/485
Replace systemd socket activation with old-style network interface configuration in the config file.
`CAP_NET_BIND_SERVICE` should be added during service startup and dropped once sockets are bound via `net.listen()`
For more details, see discussion on mailing list: https://lists.nic.cz/pipermail/knot-resolver-users/2019/000182.html
Related: #484 #342 #445
#### Related changes
(preliminary plans)
- [x] failing `net.listen()` should throw a lua error and therefore fail kresd if specified in configuration (by default)
- [x] allow specifying `FREEBIND` in `net.listen()`
- [x] unify TTY sockets, e.g. `net.listen('path', nil, { kind = 'control' })`
- [x] add distro-specific preconfig (control socket location, cache size)
- [x] use upgrade script to suggest updates to config and test in various envs

Milestone: 5.0.0; assignee: Tomas Krizek

Issue #486: DoH cycling
Reported by Vladimír Čunát (vladimir.cunat@nic.cz); updated 2019-12-18
https://gitlab.nic.cz/knot/knot-resolver/-/issues/486

With kresd 4.0.0 we've seen it occasionally slip into a loop consuming 100% CPU, becoming unresponsive. So far we've been unable to intentionally reproduce it, and apparently it would never happen with Firefox DoH as the client in combination with recent lua library versions on the kresd side. So far it's very unclear what the underlying problem is.
Technical details [note to self]: the cqueues event loop got 100% busy processing immediate actions, so even the concept of "time now" never got updated and other future events wouldn't be processed.

Issue #487: dnssec and PTR zones in knot dns resolver
Reported by signupforacomment; updated 2019-07-16
https://gitlab.nic.cz/knot/knot-resolver/-/issues/487

Hi,
maybe first the setup. Running a Debian Jessie / Stretch and using NSD as nameserver. NSD serves a number of purely internal zones, e.g. "mydomain.dmz" and "mydomain.pub" and so on.
Then installing knot dns resolver from the repository you provide and get it running, e.g. by a config like:
```
--
-- Bind works well
--
net = { '127.0.0.1', '::1' }
net.listen ( net.eth0 )
user ( 'bind', 'bind' )
cache.size = 25*MB
modules = {
'hints',
'policy',
'view'
}
cache.open (25 * MB, 'lmdb:///var/run/knot-resolver')
cache.size = 25 * MB
trust_anchors.add_file ('/usr/share/dns/root.key', 'readonly=true')
modules = {
'hints',
'policy',
'view'
}
LocalDomains = policy.todnames ({
'example.dmz',
'example.pub',
'10.168.192.in-addr.arpa',
'11.168.192.in-addr.arpa',
'0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.1.0.0.0.3.a.2.e.4.1.8.8.0.4.d.f.ip6.arpa',
'0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.2.0.0.0.3.a.2.e.4.1.8.8.0.4.d.f.ip6.arpa'
})
trust_anchors.set_insecure ({
'10.168.192.in-addr.arpa',
'11.168.192.in-addr.arpa',
'0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.1.0.0.0.3.a.2.e.4.1.8.8.0.4.d.f.ip6.arpa',
'0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.2.0.0.0.3.a.2.e.4.1.8.8.0.4.d.f.ip6.arpa' })
trust_anchors.add_file ('/etc/nsd/ksk/Kexample.dmz.key', readonly)
trust_anchors.add_file ('/etc/nsd/ksk/Kexample.pub.key', readonly)
policy.add (
    policy.suffix (
        policy.FORWARD ({ '192.168.10.1@5353', '192.168.10.2@5353' }),
        LocalDomains
    )
)
policy.add (
    policy.all (
        policy.FORWARD ({ '8.8.8.8', '8.8.4.4' })
    )
)
```
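As an aside, the long ip6.arpa names in a config like this are error-prone to type by hand (the nibble count is worth double-checking), and can be derived from the prefix instead. A sketch using only the Python standard library (illustration, not kresd functionality):

```python
import ipaddress

def ip6_arpa_zone(prefix: str) -> str:
    """ip6.arpa zone name for a nibble-aligned IPv6 prefix (prefixlen divisible by 4)."""
    net = ipaddress.ip_network(prefix)
    # reverse_pointer yields all 32 nibbles, least-significant first, plus 'ip6.arpa'
    nibbles = net.network_address.reverse_pointer.split(".")[:-2]
    keep = net.prefixlen // 4
    return ".".join(nibbles[32 - keep:] + ["ip6", "arpa"])

print(ip6_arpa_zone("2001:db8::/32"))  # -> 8.b.d.0.1.0.0.2.ip6.arpa
```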
So far so good. The keys are created with "ldns-keygen -a RSASHA256 -b 2048 -k example.com".
Doing a "dig" on some host, everything works as expected and I get a signed response. Doing a PTR dig, everything works well and I get an unsigned response. Doing a PTR dig on the IPv6 address, I get a "SERVFAIL" from knot dns resolver.
Doing the PTR IPv6 dig directly on NSD, I get a response.
E.g.
```
dig something.example.com
; <<>> DiG 9.14.3 <<>> something.example.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 22849
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;something.example.com. IN A
;; ANSWER SECTION:
something.example.com. 251185 IN A 192.168.11.5
;; Query time: 1 msec
;; SERVER: 192.168.10.1#53(192.168.10.1)
;; WHEN: lun. juil. 15 19:56:35 CEST 2019
;; MSG SIZE rcvd: 63
dig -t PTR 5.11.168.192.in-addr.arpa
; <<>> DiG 9.14.3 <<>> -t PTR 5.11.168.192.in-addr.arpa
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 34131
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;5.11.168.192.in-addr.arpa. IN PTR
;; ANSWER SECTION:
5.11.168.192.in-addr.arpa. 251159 IN PTR something.example.com.
;; Query time: 1 msec
;; SERVER: 192.168.10.1#53(192.168.10.1)
;; WHEN: lun. juil. 15 19:56:58 CEST 2019
;; MSG SIZE rcvd: 86
dig -t PTR 5.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.2.0.0.0.3.a.2.e.4.1.8.8.0.4.d.f.ip6.arpa
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 45840
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;5.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.2.0.0.0.3.a.2.e.4.1.8.8.0.4.d.f.ip6.arpa. IN PTR
;; Query time: 2 msec
;; SERVER: 192.168.10.1#53(192.168.10.1)
;; WHEN: lun. juil. 15 19:59:41 CEST 2019
;; MSG SIZE rcvd: 101
dig -t PTR 5.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.2.0.0.0.3.a.2.e.4.1.8.8.0.4.d.f.ip6.arpa @192.168.10.1 -p5353
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 40316
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;5.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.2.0.0.0.3.a.2.e.4.1.8.8.0.4.d.f.ip6.arpa. IN PTR
;; ANSWER SECTION:
5.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.2.0.0.0.3.a.2.e.4.1.8.8.0.4.d.f.ip6.arpa. 259200 IN PTR radius.arrishq.dmz.
;; AUTHORITY SECTION:
0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.2.0.0.0.3.a.2.e.4.1.8.8.0.4.d.f.ip6.arpa. 259200 IN NS ns1.example.dmz.
0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.2.0.0.0.3.a.2.e.4.1.8.8.0.4.d.f.ip6.arpa. 259200 IN NS ns2.example.dmz.
;; Query time: 1 msec
;; SERVER: 192.168.10.1#5353(192.168.10.1)
;; WHEN: lun. juil. 15 20:03:24 CEST 2019
;; MSG SIZE rcvd: 169
```
Disabling dnssec in the knot dns resolver configuration and changing nothing else, things start working as expected.
The version used is "4.1.0", with the only change being that I manually override the systemd dependency because I'm using a different init system.
Loading kresd directly via CLI:
```
[system] bind to 'fe80::abcd:edfg:hijk:lmno@53' (UDP): Invalid argument
[ ta ] warning: overriding previously set trust anchors for .
[system] interactive mode
> [ta_update] refreshing TA for example.dmz.
[ta_update] refreshing TA for example.pub.
[ta_update] key: 49312 state: Valid
[ta_update] next refresh for example.dmz. in 1 hours
[ta_update] key: 14881 state: Valid
[ta_update] next refresh for example.pub. in 1 hours
```
Maybe it's just a layer 8 problem, maybe.

Issue #489: [tls_client] session resumption doesn't work when server sends session ticket along with other data
Reported by Tomas Krizek; updated 2020-01-14
https://gitlab.nic.cz/knot/knot-resolver/-/issues/489
When using `policy.TLS_FORWARD` against a kresd that was compiled with gnutls new enough to support TLS 1.3 (Arch, Debian 10, ...), session resumption doesn't work.
When the connection is established for the first time, queries are answered. When this connection is closed (usually after TCP keepalive expires, ~10secs), it can no longer be re-established and forwarding stops working. Log contains many attempts to reconnect, all ending up with the following error:
```
[tls_client] TLS handshake with 127.0.0.1#00853 has completed
[tls_client] TLS session has resumed
[gnutls] (5) REC[0x55989bcfd8f0]: Preparing Packet Application Data(23) with length: 33 and min pad: 0
[gnutls] (5) REC[0x55989bcfd8f0]: Sent Packet[1] Application Data(23) in epoch 2 and length: 55
[gnutls] (3) ASSERT: buffers.c[_gnutls_io_read_buffered]:589
[gnutls] (3) ASSERT: record.c[_gnutls_recv_int]:1777
[gnutls] (5) REC[0x55989bcfd8f0]: SSL 3.3 Application Data packet received. Epoch 2, length: 268
[gnutls] (5) REC[0x55989bcfd8f0]: Expected Packet Application Data(23)
[gnutls] (5) REC[0x55989bcfd8f0]: Received Packet Application Data(23) with length: 268
[gnutls] (5) REC[0x55989bcfd8f0]: Decrypted Packet[0] Handshake(22) with length: 251
[gnutls] (3) ASSERT: buffers.c[get_last_packet]:1170
[gnutls] (4) HSK[0x55989bcfd8f0]: NEW SESSION TICKET (4) was received. Length 247[247], frag offset 0, frag length: 247, sequence: 0
[gnutls] (3) ASSERT: buffers.c[_gnutls_handshake_io_recv_int]:1431
[gnutls] (4) HSK[0x55989bcfd8f0]: parsing session ticket message
[gnutls] (3) ASSERT: record.c[_gnutls_recv_in_buffers]:1579
[gnutls] (3) ASSERT: record.c[_gnutls_recv_int]:1777
[io] => connection to '127.0.0.1#00853': error processing TLS data, close
```
The resolver attempts the same resolution multiple times with the same result, and ultimately answers the client with SERVFAIL. Cached queries are still answered correctly.
This is easy to reproduce when both the client and the forwarding target resolver are compiled with gnutls>3.6 and these configs are used:
```
# kresd_fwd_target.conf
net.listen('127.0.0.1', 853, { kind = 'tls' })
```
```
# kresd.conf
net.listen('127.0.0.1', 53535)
policy.add(policy.all(policy.TLS_FORWARD({
{'127.0.0.1', insecure=true}
})))
```https://gitlab.nic.cz/knot/knot-resolver/-/issues/490req->add_selected is ignored2019-07-24T17:12:26+02:00Petr Špačekreq->add_selected is ignoredValues stored in `req->add_selected` are being ignored, so modules cannot add records to the additional section. That's unfortunate; we need to fix this.https://gitlab.nic.cz/knot/knot-resolver/-/issues/491how can i run the lua cache test?2020-11-16T12:31:57+01:00beckhamaaahow can i run the lua cache test?**I want to insert an RR into the cache via a Lua script, but I don't know how to invoke it. The source is in the directory /tests/config/cache.test.lua:**
```lua
-- test access to cache through context
local function test_context_cache()
	local c = kres.context().cache
	is(type(c), 'cdata', 'context has a cache object')
	local s = c.stats
	isnt(s.read and s.read_miss and s.write, 'context cache stats works')
	-- insert a record into cache
	local rdata = '\1\2\3\4'
	local rr = kres.rrset('\3com\0', kres.type.A, kres.class.IN, 66)
	rr:add_rdata(rdata, #rdata)
	local s_write = s.write
	ok(c:insert(rr, nil, 0, 0), 'cache insertion works')
	ok(c:commit(), 'cache commit works')
	isnt(s.write, s_write, 'cache insertion increments counters')
end
```
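For context, the same cache API can be exercised from a plain kresd config file, outside the test harness; a minimal sketch (note that the `is()`/`ok()`/`isnt()` helpers above exist only in the test environment, so plain `assert` is used here):

```lua
-- Minimal sketch: insert an A record into the cache from kresd.conf.
-- Uses only the calls shown in the test above; run inside a kresd instance.
local c = kres.context().cache
local rdata = '\1\2\3\4'  -- wire-format rdata for 1.2.3.4
local rr = kres.rrset('\3com\0', kres.type.A, kres.class.IN, 66)
rr:add_rdata(rdata, #rdata)
assert(c:insert(rr, nil, 0, 0))
c:commit()
```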
**I invoke it when kresd starts, and I get this error:**
```
modules = { 'insert' }
0
1
error: ERROR: Function not implemented
stack traceback:
[C]: in function 'get'
.../local/kr/lib/x86_64-linux-gnu/knot-resolver/sandbox.lua:251: in function '__index'
...b/x86_64-linux-gnu/knot-resolver/kres_modules/insert.lua:99: in function 'test_context_cache'
...b/x86_64-linux-gnu/knot-resolver/kres_modules/insert.lua:108: in main chunk
[C]: at 0x7f28bc6addf0
[C]: in function 'load'
.../local/kr/lib/x86_64-linux-gnu/knot-resolver/sandbox.lua:147: in function '__newindex'
.../local/kr/lib/x86_64-linux-gnu/knot-resolver/sandbox.lua:300: in function '__newindex'
[string "modules = { 'insert' }"]:1: in main chunk
ERROR: No such file or directory
stack traceback:
[C]: in function 'load'
.../local/kr/lib/x86_64-linux-gnu/knot-resolver/sandbox.lua:147: in function '__newindex'
.../local/kr/lib/x86_64-linux-gnu/knot-resolver/sandbox.lua:300: in function '__newindex'
[string "modules = { 'insert' }"]:1: in main chunk
```
https://gitlab.nic.cz/knot/knot-resolver/-/issues/493Resolver stops working and returns SERVFAIL until restarted2022-02-04T17:49:26+01:00ValdikSSResolver stops working and returns SERVFAIL until restartedSome time after normal operation, knot-resolver stops resolving any domains and returns SERVFAIL on all DNS queries.
I have the following configuration:
```
# cat /etc/knot-resolver/kresd.conf
user('knot-resolver','knot-resolver')
cache.size = 300 * MB
net.ipv6 = false
modules = {
'hints > iterate', -- Load /etc/hosts and allow custom root hints
'stats', -- Track internal statistics
'predict', -- Prefetch expiring/frequent records
}
-- minimum TTL = 2 minutes
cache.min_ttl(120)
dofile("/etc/knot-resolver/knot-aliases-alt.conf")
policy.add(
policy.suffix(
policy.STUB(
{'127.0.0.4'}
),
policy.todnames(blocked_hosts)
)
)
# cat /etc/knot-resolver/knot-aliases-alt.conf
blocked_hosts = {
"0000a-fast-proxy.de.",
"002cc20.icu.",
"007ingyenletoltes.hu.",
"007rc.biz.",
"007slots.com.",
"00seeds.com.",
"010119azino777.com.",
"010119azino777.ru.",
…
"zzzes.ru.",
"zzztorrent.net.",
"zzzz1.live.",
"zzzz2.live.",
}
```
Both normal recursive queries and queries which should be forwarded to 127.0.0.4 (from blocked_hosts) fail to work.
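When the resolver wedges like this, a quick snapshot from kresd's interactive console can help distinguish a stuck event loop from upstream trouble (a sketch; `worker.stats().concurrent` counts in-flight queries):

```lua
-- Sketch: inspect live counters from the kresd interactive/control console.
print(worker.stats().concurrent)    -- in-flight queries; steadily growing suggests a stall
for k, v in pairs(stats.list()) do  -- dump the stats module's counters
	print(k, v)
end
```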
I've just enabled verbose logging to monitor the issue, but the log seems to buffer a lot. I see new information in journald's journalctl in spikes, a large log every 30 seconds or so. I'm not sure if this is some sort of cache and is to be expected, or it shows some kind of lock problem.
It even triggered a watchdog once:
```
systemd[1]: kresd@1.service: Watchdog timeout (limit 10s)!
systemd[1]: kresd@1.service: Killing process 23036 (kresd) with signal SIGABRT.
systemd[1]: kresd@1.service: Main process exited, code=killed, status=6/ABRT
systemd[1]: kresd@1.service: Unit entered failed state.
systemd[1]: kresd@1.service: Failed with result 'watchdog'.
systemd[1]: kresd@1.service: Service hold-off time over, scheduling restart.
```
The issue happens irregularly. It used to work fine for weeks, but in the last 3 days it has happened 3 times. Sometimes it takes dozens of hours, sometimes only several minutes. I did not update the configuration, and I updated the software only after the second time. It happens on 4.1.0.
Right now I'm running verbose logging and will update this issue when it happens again.https://gitlab.nic.cz/knot/knot-resolver/-/issues/495improve error reporting and handling2021-06-01T11:02:38+02:00Tomas Krizekimprove error reporting and handlingCurrently, some assertions seem to be used as a way to report unlikely events, and when these are used in production, they can cause needless crashes (even though they're then handled by systemd's `Restart=on-abnormal` facility)
I propose the following changes:
- The code should not rely on assertions, if it does, it's a bug that should be fixed.
- Errors, even unlikely ones (currently handled by assertions) should be logged properly.
- ~~There could be an option (off by default) to enable reporting these remotely.~~Tomas KrizekTomas Krizekhttps://gitlab.nic.cz/knot/knot-resolver/-/issues/496config.prefill tests fail due to expired RRSIGs2019-08-23T11:12:10+02:00Tomas Krizekconfig.prefill tests fail due to expired RRSIGsZone signatures added in !840 have expired and the test is now failing. While it would be possible to add libfaketime, I think it's overkill. The zone should be re-signed to extend the signatures to something like 20 years into the future.Ivana KrumlovaIvana Krumlovahttps://gitlab.nic.cz/knot/knot-resolver/-/issues/497last root hint is always picked for priming2019-09-25T14:02:05+02:00Štěpán Balážiklast root hint is always picked for primingThis is normally the `m` root server via IPv6.
Closely related to #447https://gitlab.nic.cz/knot/knot-resolver/-/issues/498Quickstart guide: internal resolver2019-12-23T20:08:37+01:00Petr ŠpačekQuickstart guide: internal resolverOutline for scenario "internal resolver":
- package installation (common to all quickstart guides)
- listening on network interfaces
- policy to resolve internal-only domains (e.g. `company.example` domain which is not available on the public Internet)5.0.0Aleš MrázekAleš Mrázekhttps://gitlab.nic.cz/knot/knot-resolver/-/issues/499Quickstart guide: personal privacy-preserving resolver2019-12-23T20:08:36+01:00Petr ŠpačekQuickstart guide: personal privacy-preserving resolverOutline for scenario "personal privacy-preserving resolver" (localhost only):
- package installation (common to all quickstart guides)
- policy to TLS-forward queries to trusted third parties (policy TLS_FORWARD, slicing to split queries to multiple targets)
- moving cache to tmpfs to avoid cache writes to permanent storage5.0.0Aleš MrázekAleš Mrázekhttps://gitlab.nic.cz/knot/knot-resolver/-/issues/500Quickstart guide: ISP resolver2019-12-23T20:08:35+01:00Petr ŠpačekQuickstart guide: ISP resolverOutline for scenario "ISP resolver":
- package installation (common to all quickstart guides)
- listening on network interfaces
- limiting access to clients in ISP networks (view + policy.REFUSE)
- policy to comply with mandatory domain blocking
- a) RPZ
- b) hand-made list
- configuring cache size
- running multiple instances (kresd@1, kresd@2, ...)
- monitoring - cache hit and other stats5.0.0Aleš MrázekAleš Mrázekhttps://gitlab.nic.cz/knot/knot-resolver/-/issues/501Support ESNI / HTTPSSVC DNS Resource Records2019-08-17T21:10:07+02:00Gaspard d'HautefeuilleSupport ESNI / HTTPSSVC DNS Resource Records- Program: Knot Resolver, Knot DNS
- Issue type: Feature request
### Short description
The request is about supporting ESNI DNS Resource Record as mentioned in this [Internet Draft](https://tools.ietf.org/html/draft-ietf-tls-esni-04#section-4.1).
### Usecase
The current (bad) scenario: you use the latest version of Firefox, you enable ESNI and DoH with a Trusted Recursive Resolver such as 1.1.1.1 provided by Cloudflare, and they do the ESNI work for you.
I am currently using DNS over TLS with Knot Resolver and want to use ESNI on any domain name.
### Description
Knot Resolver and Knot DNS should support ESNI DNS Resource Record.https://gitlab.nic.cz/knot/knot-resolver/-/issues/502[discussion] split DoH into separate process/proxy?2020-09-18T14:22:01+02:00Petr Špaček[discussion] split DoH into separate process/proxy?Current DoH implementation is causing hard-to-debug problems like #486 or #465 . Given nature of DoH, it might be better to split it off into separate process.
Options:
- keep DoH inside Knot Resolver
- hand-written server - does not sound like a good idea
- plugin for one of existing HTTP servers
- server with FastCGI support usable with any compliant HTTP server
- stand-alone proxy like https://www.envoyproxy.io/ or something similar
We need to eventually decide which way to go.https://gitlab.nic.cz/knot/knot-resolver/-/issues/505net.tls not working when certificates are outside of /etc/knot-resolver/: GNU...2019-09-03T18:51:35+02:00Gaspard d'Hautefeuillenet.tls not working when certificates are outside of /etc/knot-resolver/: GNUTLS_E_FILE_ERROR```bash
sept. 03 17:03:49 myhost.example.com systemd[1]: Starting Knot Resolver daemon...
sept. 03 17:03:49 myhost.example.com kresd[1452458]: [tls] gnutls_certificate_set_x509_key_file(/etc/letsencrypt/live/myhost.example.com/fullchain.pem,/etc/letsencrypt/live/myhost.example.com/privkey.pem) failed: -64 (GNUTLS_E_FILE_ERROR)
sept. 03 17:03:49 myhost.example.com kresd[1452458]: error occured here (config filename:lineno is at the bottom, if config is involved):
sept. 03 17:03:49 myhost.example.com kresd[1452458]: stack traceback:
sept. 03 17:03:49 myhost.example.com kresd[1452458]: [C]: in function 'tls'
sept. 03 17:03:49 myhost.example.com kresd[1452458]: /etc/knot-resolver/kresd.conf:27: in main chunk
sept. 03 17:03:49 myhost.example.com kresd[1452458]: ERROR: Invalid argument
sept. 03 17:03:49 myhost.example.com kresd[1452458]: [ ta ] warning: . DNSKEY is missing the SEP bit; flags 256 instead of 257
sept. 03 17:03:49 myhost.example.com systemd[1]: kresd@1.service: Main process exited, code=exited, status=1/FAILURE
sept. 03 17:03:49 myhost.example.com systemd[1]: kresd@1.service: Failed with result 'exit-code'.
sept. 03 17:03:49 myhost.example.com systemd[1]: Failed to start Knot Resolver daemon.
```https://gitlab.nic.cz/knot/knot-resolver/-/issues/506kres_modules/prefill.lua:186: [prefill] configuration must be in table {owner...2019-09-20T10:46:34+02:00Gaspard d'Hautefeuillekres_modules/prefill.lua:186: [prefill] configuration must be in table {owner name = {per-zone config}}Hi,
I am getting this issue with:
```bash
prefill.config({['.'] = {url = 'https://www.internic.net/domain/root.zone', ca_file = '/etc/ssl/certs/ca-certificates.crt', interval = 86400}})
```
I have the following modules:
```bash
modules = {
'policy',
'view',
'hints',
'prefill',
'serve_stale < cache',
'workarounds < iterate',
'stats',
'predict'
}
```
But with the prefill.config, I get:
'kres_modules/prefill.lua:186: [prefill] configuration must be in table {owner name = {per-zone config}}'
And yet, it is the one provided in https://knot-resolver.readthedocs.io/en/stable/modules.html#cache-prefilling.Vladimír Čunátvladimir.cunat@nic.czVladimír Čunátvladimir.cunat@nic.czhttps://gitlab.nic.cz/knot/knot-resolver/-/issues/507Setup kresd as a service on FreeBSD2019-09-07T17:46:06+02:00Sascha L.Setup kresd as a service on FreeBSDHello,
I setup knot-resolver on FreeBSD (FreeNAS to be exact - in a jail).
When I execute `kresd -c /etc/knot-resolver/kresd.conf` it works without any flaws. Unfortunately, the daemon is then bound to the SSH connection, resulting in it stopping if I close the connection.
I then tried to setup kresd as a service (I created a file named `knotresolver` in `/usr/local/etc/rc.d`). These are my attempts:
```bash
#!/bin/sh
#
# PROVIDE: knotresolver
# REQUIRE: networking
# KEYWORD:
. /etc/rc.subr
name="knotresolver"
rcvar="knotresolver_enable"
knotresolver_user="root"
knotresolver_command="/usr/local/sbin/kresd -c /etc/knot-resolver/kresd.conf"
pidfile="/var/run/${name}.pid"
command="/usr/sbin/daemon"
command_args="-P ${pidfile} -r -u ${knotresolver_user} -o /var/log/knotresolver.log -f ${knotresolver_command}"
load_rc_config $name
: ${knotresolver_enable:=no}
run_rc_command "$1"
```
Second attempt:
```bash
#!/bin/sh
#
# PROVIDE: knotresolver
# REQUIRE: networking
# KEYWORD:
. /etc/rc.subr
name="knotresolver"
rcvar="knotresolver_enable"
knotresolver_user="root"
pidfile="/var/run/${name}.pid"
command="/usr/local/sbin/kresd -c /etc/knot-resolver/kresd.conf"
load_rc_config $name
: ${knotresolver_enable:=no}
run_rc_command "$1"
```
Upon trying to start the service both attempts resulted in the following error:
```
knotresolver does not exist in /etc/rc.d or the local startup
directories (/usr/local/etc/rc.d), or is not executable
```
So my question is: Is there someone who can point out an obvious mistake I made or who already setup kresd on FreeBSD (FreeNAS)?
*Note: I have root privileges, so it isn't a user rights issue!*
Cheers!https://gitlab.nic.cz/knot/knot-resolver/-/issues/508error starting kresd (new install): network socket kind 'kresd.socket' not ha...2019-09-16T09:29:56+02:00Ghost Usererror starting kresd (new install): network socket kind 'kresd.socket' not handled when opening ...new install version 4.2.0-1
**install procedure:**
```
echo 'deb http://download.opensuse.org/repositories/home:/CZ-NIC:/knot-resolver-latest/Debian_10/ /' > /etc/apt/sources.list.d/home:CZ-NIC:knot-resolver-latest.list
wget -nv https://download.opensuse.org/repositories/home:CZ-NIC:knot-resolver-latest/Debian_10/Release.key -O Release.key
apt-key add - < Release.key
apt-get update
apt-get install knot-resolver
apt-get install knot-resolver-module-http
```
**kresd.conf:**
```
modules = {
'hints > iterate', -- Load /etc/hosts and allow custom root hints
'stats', -- Track internal statistics
'predict', -- Prefetch expiring/frequent records
'http',
}
-- Cache size
cache.size = 100 * MB
net.listen('192.168.2.57', 8453, { kind = 'webmgmt' }) -- see http module
modules.load('prefill')
prefill.config({
['.'] = {
url = 'https://www.internic.net/domain/root.zone',
ca_file = '/etc/ssl/certs/ca-certificates.crt',
interval = 86400 -- seconds
}
})
hints.root({
['i.root-servers.net.'] = { '2001:7fe::53', '192.36.148.17' }
})
```
**kresd.socket:**
```
[Unit]
Description=Knot Resolver network listeners
Documentation=man:kresd.systemd(7)
Documentation=man:kresd(8)
Before=sockets.target
[Socket]
FreeBind=true
ListenDatagram=[fdaa:bbcc:ddee:2::5555]:5555
ListenStream=[fdaa:bbcc:ddee:2::5555]:5555
ListenDatagram=127.10.10.5:5555
ListenStream=127.10.10.5:5555
Service=kresd@1.service
Slice=system-kresd.slice
[Install]
WantedBy=sockets.target
```
**error:**
```
Starting Knot Resolver daemon...
**warning: network socket kind 'kresd.socket' not handled when opening '127.10.10.5#5555'. Likely causes: typo or not loading 'http' module.**
**warning: network socket kind 'kresd.socket' not handled when opening 'fdaa:bbcc:ddee:2::5555#5555'. Likely causes: typo or not loading 'http' module**
[http] created new ephemeral TLS certificate
Started Knot Resolver daemon.
[prefill] root zone file valid for 23 hours 35 minutes, reusing data from disk
[prefill] root zone successfully parsed, import started
[prefill] root zone refresh in 23 hours 35 minutes
```
Since the http module is loaded (see kresd.conf), it is supposed to be a typo, but I don't see anything wrong. I also tried the original config, using 127.0.0.1 and :: as IP addresses, with no result. I really can't find the problem, thus asking if something's wrong...https://gitlab.nic.cz/knot/knot-resolver/-/issues/509bogus key: bad keys, broken trust chain2019-09-13T17:23:03+02:00MartBbogus key: bad keys, broken trust chainHey, I got the following log excerpt while trying to resolve postbank.de (a decently sized German bank):
```
Sep 13 14:44:16 netmgmt kresd[11685]: [gnutls] (5) REC[0x558a3d6140]: Decrypted Packet[23] Application Data(23) with length: 1738
Sep 13 14:44:16 netmgmt kresd[11685]: [gnutls] (3) ASSERT: buffers.c[_gnutls_io_read_buffered]:589
Sep 13 14:44:16 netmgmt kresd[11685]: [gnutls] (3) ASSERT: record.c[_gnutls_recv_int]:1777
Sep 13 14:44:16 netmgmt kresd[11685]: [14255.14][iter] <= rcode: NOERROR
Sep 13 14:44:16 netmgmt kresd[11685]: [14255.14][resl] <= server: '1.1.1.1' rtt: 21 ms
Sep 13 14:44:16 netmgmt kresd[11685]: [14255.14][resl] => resuming yielded answer
Sep 13 14:44:16 netmgmt kresd[11685]: [14255.14][vldr] >< bogus key: postbank.de. DNSKEY (0 matching RRSIGs, 0 expired, 0 not yet valid, 0 invalid signer, 0 invalid label count, 0 invalid key, 0 invalid crypto, 0 invalid NSEC)
Sep 13 14:44:16 netmgmt kresd[11685]: [14255.14][vldr] <= bad keys, broken trust chain
Sep 13 14:44:16 netmgmt kresd[11685]: [14255.14][cach] => not overwriting DNSKEY postbank.de.
Sep 13 14:44:16 netmgmt kresd[11685]: [14255.14][resl] finished: 0, queries: 4, mempool: 49200 B
```
I'm forwarding my queries using TLS_FORWARD to 1.1.1.1, a server which I had no issues with previously.
What do i need to look for here, are my local keys invalid?https://gitlab.nic.cz/knot/knot-resolver/-/issues/510prometheus and graphite metrics are missing some cache stats2020-08-06T11:40:28+02:00Vladimír Čunátvladimir.cunat@nic.czprometheus and graphite metrics are missing some cache statsFrom cache it only exports [`cache.stats()`](https://knot-resolver.readthedocs.io/en/stable/daemon.html#c.cache.stats), but that only counts operations. Ideas:
- [x] `cache.count()` is quite an important number
- [x] utilized LMDB size in bytes might be even more interesting (computed by GC anyway), and could be paired to `cache.current_size`
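Until those are exported, a config-side stopgap (a sketch; the 60-second interval is arbitrary) can log the record count periodically so it at least reaches the journal:

```lua
-- Sketch: periodically log cache.count(), which is not yet exported to metrics.
event.recurrent(60 * sec, function ()
	print('cache records: ' .. tostring(cache.count()))
end)
```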
_Reported [on gitter](https://gitter.im/CZ-NIC/knot-resolver?at=5d7bd29d5d40aa0d7d2e7725)._https://gitlab.nic.cz/knot/knot-resolver/-/issues/512prefill: deadlock issue2019-12-20T14:32:34+01:00Vladimír Čunátvladimir.cunat@nic.czprefill: deadlock issueThe https download of the (root) zone is blocking and it uses OS DNS. That combination will dead-lock e.g. in the case when kresd is the (only) resolver for the OS on which it runs. _Originally discovered in #506._
Plan – implementation details: I expect we'd better convert the fetch to use `lua-http` library, as it's asynchronous and has a relatively [convenient API for this](https://daurnimator.github.io/lua-http/0.3/#retrieving-a-document).https://gitlab.nic.cz/knot/knot-resolver/-/issues/513rpm: change permission on config directory to read-only2019-11-20T10:53:05+01:00Tomas Krizekrpm: change permission on config directory to read-onlyRPM package uses RFC 5011 to update DNSSEC TA in `/etc/knot-resolver/root.keys`. This requires the `/etc/knot-resolver/` config directory to be writable by `knot-resolver` user.
Possible solutions:
- disable RFC 5011, make `/etc/knot-resolver/root.keys` read-only. This would require a package update when TAs are rolled over.
- move the TA file to a more appropriate location, e.g. `/var/lib/knot-resolver/root.keys`4.3.0Tomas KrizekTomas Krizekhttps://gitlab.nic.cz/knot/knot-resolver/-/issues/514Segfault on 4.2.1 on armv72019-10-04T08:52:06+02:00Jonathan CoetzeeSegfault on 4.2.1 on armv7I updated my personal knot-based container from 4.2.0 to 4.2.1 and it now fails to start up. Running `kresd` manually from inside the container shows that it's segfaulting on startup ([core](/uploads/daae8f89a99a18c67ce5e2ab92e9b518/core)). Turning on verbose logging doesn't seem to reveal anything. This is on my RPi 4 running up-to-date Raspbian Buster (image runs without error on my MacBook Pro). If you have an armv7 environment to test with I've pushed two tags `jonocoetzee/private-dns:v4.2.0` and `jonocoetzee/private-dns:v4.2.1` ([repo](https://gitlab.com/jonocoetzee/private-dns)). Let me know if there's any other info you need.4.2.2Vladimír Čunátvladimir.cunat@nic.czVladimír Čunátvladimir.cunat@nic.czhttps://gitlab.nic.cz/knot/knot-resolver/-/issues/515Cert should be read as root before droping privileges2019-09-30T12:05:34+02:00Ngô HuyCert should be read as root before droping privilegesHi team,
Thanks for the great software; I use knot-resolver on my server. I have a problem with certificates owned by root: it returned `Error -64` from gnutls because the certs are not readable by other users.
Like nginx, which reads certs before dropping privileges, knot-resolver should follow suit.
Best regards,
Severushttps://gitlab.nic.cz/knot/knot-resolver/-/issues/516Feature request to add a new option to meson.build2020-08-19T13:29:10+02:00Jan PavlinecFeature request to add a new option to meson.buildIt would be nice to have a meson option to change "knot-resolver" to something else.
see https://gitlab.labs.nic.cz/knot/knot-resolver/blob/master/meson.build#L46https://gitlab.nic.cz/knot/knot-resolver/-/issues/518RRset merge operation is too slow for big RRsets2020-06-17T10:46:32+02:00Petr ŠpačekRRset merge operation is too slow for big RRsetsQuery `cmts1-dhcp.longlines.com. A` takes more than 10 seconds, probably because some $`\mathcal{O}(n^2)`$ complexity in `kr_ranked_rrarray_add()` or something like that.
Such query can be used to DoS resolver so we have to fix that.4.3.0Vladimír Čunátvladimir.cunat@nic.czVladimír Čunátvladimir.cunat@nic.czhttps://gitlab.nic.cz/knot/knot-resolver/-/issues/519ERROR: IDNA (string start/ends with forbidden hyphen)2019-11-08T05:37:30+01:00Ghost UserERROR: IDNA (string start/ends with forbidden hyphen)`kdig -d @ip.ip.ip.ip +tls-ca +tls-host=dot.dns.host.ex.dns.io A +short $(cat 'list.txt') >> 'IPs.txt'`
`;; ERROR: IDNA (string start/ends with forbidden hyphen)`
`;; ERROR: invalid parameter: 0a-api-4-.btcc.com`
kdig (Knot DNS), version 2.9.0https://gitlab.nic.cz/knot/knot-resolver/-/issues/520prefill: remove depedency on lua-filesystem (lfs)2019-12-18T16:20:43+01:00Petr Špačekprefill: remove depedency on lua-filesystem (lfs)Package lua-filesystem (library lfs) in version for Lua 5.1 is not available in RHEL 8, and we in fact need only one little function from it. Let's get rid of the dependency.
Proposed approach:
- [x] add small C function to get value equivalent to `lfs.attributes(filenamename).modification`
to our auxiliary library lib/utils.c
- [x] replace lfs library in `modules/prefill.lua` with call to `ffi.C.our_new_func()`
- [x] remove lua-filesystem references from packaging files5.0.0https://gitlab.nic.cz/knot/knot-resolver/-/issues/521replace lua-socket depedency with lua-http2019-12-20T14:32:36+01:00Petr Špačekreplace lua-socket depedency with lua-httpAt the moment we are using two packages for HTTP requests from Lua:
- lua-socket (Lua library `ssl.https`)
- lua-http (Lua library `http`)
This complicates packaging and is generally unnecessary.
It seems that package lua-socket (Lua library `ssl.https`) offers only blocking API, and that is causing problems like e.g. #512, so let's replace `lua-socket` with `lua-http`.
It should "accidentally" fix #512 and also make packaging easier.
Affected modules:
- prefill (#512)
- trust_anchors bootstrap
- possibly others
Example of a non-blocking HTTP request:
```
function blacklist_reload()
local http_request = require('http.request')  -- lua-http; the original snippet assumed this was already in scope
local url = 'https://raw.githubusercontent.com/CSNOG/MFCR-blacklist/master/blacklist.txt'
local headers, stream = http_request.new_from_uri(url):go()
assert(headers, 'HTTP client library error')
assert(tonumber(headers:get(':status')) == 200,
string.format('HTTP status %s instead of expected 200\n', headers:get(':status')))
local tmpfile = stream:get_body_as_file(5)
assert(tmpfile, 'error while getting blacklist HTTP body in limit 5 seconds')
end
worker.bg_worker.cq:wrap(blacklist_reload)
```
Error handling needs more work etc.5.0.0https://gitlab.nic.cz/knot/knot-resolver/-/issues/522Walled Garden capability?2019-11-29T12:19:12+01:00Mr. Blue CoatWalled Garden capability?Sorry if this has been asked before but I haven't found any documentation or reference: Can knot-resolver act as a walled garden? My specific use case is to limit access to a limited whitelist of sites (walled garden) during work hours and then regular OpenDNS after hours.https://gitlab.nic.cz/knot/knot-resolver/-/issues/523The growing phenomenon of CNAME cloaked trackers2019-12-18T19:15:02+01:00Ghost UserThe growing phenomenon of CNAME cloaked trackersResolvers must include incoming cname filtering... [[more info here](https://medium.com/nextdns/cname-cloaking-the-dangerous-disguise-of-third-party-trackers-195205dc522a)]
uBlock Origin for Firefox addresses new first-party tracking method [CNAME cloaking]
https://www.ghacks.net/2019/11/20/ublock-origin-for-firefox-addresses-new-first-party-tracking-method/

https://gitlab.nic.cz/knot/knot-resolver/-/issues/524
cache size = "max" (2020-01-29, Petr Špaček)

For use-cases where cache is on a dedicated partition (like RAM disk) it would be useful to have option "cache size = max".
In that case kresd would allocate as big a cache as possible, obviating the need to specify cache size in two different places - kresd config + RAM disk spec.

https://gitlab.nic.cz/knot/knot-resolver/-/issues/525
cache preallocation (2020-01-13, Petr Špaček)

We should consider pre-allocating the cache file when cache size is being changed. That would eliminate problems like #197.

https://gitlab.nic.cz/knot/knot-resolver/-/issues/526
cache madvise - random access? (2019-12-02, Petr Špaček)

Investigate impact of `madvise(MADV_RANDOM)` on mmaped cache.
See http://man7.org/linux/man-pages/man2/madvise.2.html

https://gitlab.nic.cz/knot/knot-resolver/-/issues/527
fine-grained logging (2021-07-29, Petr Špaček)

Current logging configuration is just one bit: verbosity on/off. This makes it hard to monitor and debug large instances.
Let's collect ideas for improvement in this ticket:
- [x] per-request logging - ability to run single request with verbose logging is very handy for debugging. We have a prototype in `/trace` endpoint in HTTP module but this module does not log everything for a given request, and also it should be much easier to use than full HTTP.
- [x] per-type logging - it might be handy to enable/disable certain types of logging, e.g. control socket log might be too noisy if there is enough API traffic etc.
- [x] fine grained log levels exposed to the logging system - external log collectors need to know if given message is debug/info/error etc.
- [ ] structured logging? log some rudimentary metadata in structured form - e.g. query name + type + rcode? This might be very handy for network operations centers etc.

https://gitlab.nic.cz/knot/knot-resolver/-/issues/528
control socket logging is too noisy (2019-12-11, Petr Špaček)

On busy systems the control socket is too noisy.
Originally I thought it was a "security"/"audit" feature that all the traffic would get into kresd logs, but it turns out that some users use the API heavily and this leads to a voluminous and at the same time useless log.
Proposal:
Log socket communication only if verbose mode is enabled.
A better solution would be fine-grained logging configuration (#527) but that's out of scope of this ticket.

5.0.0 · Vladimír Čunát <vladimir.cunat@nic.cz>

https://gitlab.nic.cz/knot/knot-resolver/-/issues/529
deprecate -f option (forking) (2020-10-23, Petr Špaček)

Current option `-f` allows users to run multiple kresd instances at once. That is good in theory but has many shortcomings, namely:
- parent kresd process does not monitor child processes
- processes cannot be re/started without breaking kresd inter-process communication
- current implementation of `map()` command has convoluted error handling (and is error prone)
- it is harder to monitor processes from outside, e.g. using standard tools in systemd
- forking and file descriptor passing between instances is a mess
Fixing these shortcomings is of course possible, but it ultimately leads to re-implementation of a supervisor process, which is an even worse idea. We already have systemd/supervisord/procd for this task so let's rely on that instead of re-inventing the wheel.
Proposal:
- deprecate `-f` option and `map()` command in 5.0.0
- add warning if `-f` or `map()` are used
- add new option `-n`/`--noninteractive` which should be used instead of `-f 1`
- update manuals to use new option `-n` + systemd instances to run multiple processes
Wild idea:
- update `map()` command to use sockets in `rundir` instead of FDs inherited from parent? Is it worth the complexity?

5.0.0 · Vladimír Čunát <vladimir.cunat@nic.cz>

https://gitlab.nic.cz/knot/knot-resolver/-/issues/530
DNSSEC validation failure db.ripe.net. DS (2020-01-31, Tore Anderson)

My local Knot Resolver instance seems to be unable to look up `apps.db.ripe.net`, logging `kresd[1241]: DNSSEC validation failure db.ripe.net. DS` for each attempt.
https://dnsviz.net/d/apps.db.ripe.net/dnssec/ and https://dnssec-debugger.verisignlabs.com/apps.db.ripe.net seem to think that things are in order, and so does the upstream recursive resolver (running ISC BIND with validation enabled):
```
$ dig @87.238.33.1 apps.db.ripe.net
; <<>> DiG 9.11.13-RedHat-9.11.13-3.fc31 <<>> @87.238.33.1 apps.db.ripe.net
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 8271
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;apps.db.ripe.net. IN A
;; ANSWER SECTION:
apps.db.ripe.net. 21260 IN A 193.0.6.142
;; Query time: 2 msec
;; SERVER: 87.238.33.1#53(87.238.33.1)
;; WHEN: fr. des. 13 14:38:55 CET 2019
;; MSG SIZE rcvd: 61
```
I've tried restarting `kresd` and clearing the cache, no go.
`/etc/knot-resolver/kresd.conf` contains:
```
-- vim:syntax=lua:set ts=4 sw=4:
-- Refer to manual: http://knot-resolver.readthedocs.org/en/stable/daemon.html#configuration
-- Network interface configuration: see kresd.systemd(7)
-- For DNS-over-HTTPS and web management when using http module
-- modules.load('http')
-- http.config({
-- cert = '/etc/knot-resolver/mycert.crt',
-- key = '/etc/knot-resolver/mykey.key',
-- tls = true,
-- })
-- To disable DNSSEC validation, uncomment the following line (not recommended)
-- trust_anchors.remove('.')
-- Load useful modules
modules = {
'hints > iterate', -- Load /etc/hosts and allow custom root hints
'stats', -- Track internal statistics
'predict', -- Prefetch expiring/frequent records
}
-- Cache size
cache.size = 100 * MB
trust_anchors.set_insecure({'lan'})
policy.add(policy.suffix(policy.FORWARD('192.168.1.1'),{todname('lan')}))
policy.add(policy.all(policy.FORWARD('87.238.33.1')))
modules.load('bogus_log')
```
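One way to narrow this down, assuming the failure happens in local validation on top of the forwarder, is to temporarily swap the last rule for `policy.STUB`, which sends queries upstream without validating locally. A diagnostic sketch, not a recommended fix:

```lua
-- diagnostic only: STUB forwards upstream without local DNSSEC validation,
-- so if apps.db.ripe.net resolves with this rule in place of the FORWARD
-- rule above, the failure is in validation over policy.FORWARD
policy.add(policy.all(policy.STUB('87.238.33.1')))
```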
I'm using the Knot Resolver binaries included with Fedora 31 (`knot-resolver-4.2.2-2.fc31.x86_64`). I'm aware it's not the latest available release; however, there's no mention of any bug related to DNSSEC validation being fixed in v4.3.0 at https://gitlab.labs.nic.cz/knot/knot-resolver/-/tags

Vladimír Čunát <vladimir.cunat@nic.cz>

https://gitlab.nic.cz/knot/knot-resolver/-/issues/531
kresd[]: /etc/knot-resolver/kresd.conf:30: attempt to call field 'ttl' (a nil value) (2019-12-18, Jan Škaloud)

kresd@1.service won't start on ubuntu 18.04 package knot-resolver 4.3.0-1 with config option "hints.ttl(3600)", failing with error:
kresd[]: /etc/knot-resolver/kresd.conf:30: attempt to call field 'ttl' (a nil value)

https://gitlab.nic.cz/knot/knot-resolver/-/issues/533
AF_XDP optimization (2020-11-16, Petr Špaček)

Explore and implement prototype of AF_XDP network stack optimization.

2020 Q2 · Vladimír Čunát <vladimir.cunat@nic.cz>

https://gitlab.nic.cz/knot/knot-resolver/-/issues/536
declarative config - experiments with sysrepo (2020-10-09, Petr Špaček)

Problem statement
-----------------
Current configuration is practically a Lua program, which is a nightmare for multiple reasons:
- non-programmers have a hard time understanding what is going on
- Lua language makes it hard to detect mistakes in config
- run-time reconfiguration requires doing each change N times for N processes
- gathering statistics from multiple processes is a total pain
- currently it exposes low-level stuff and is prone to crashes on invalid use (#182)
Experiment
----------
Do some preliminary experiments with sysrepo and see if it improves the situation sufficiently to make it worth investing in it further.
Objectives
----------
- experimental mode - sysrepo **must not** become a hard dependency of Knot Resolver
- put as much code as possible into separate (and optional) module
- minimize code duplication
- current Lua config must work the same way as it did before, sysrepo only complements the Lua config
Once we have sufficient experience with implementation of sysrepo into kresd we will revisit pros and cons and decide what to do next.
Requirements for next stages
----------------------------
- sysrepo+libyang must be widely available in distros we care about
- sysrepo+libyang must be sufficiently stable
- sysrepo must allow us to build a new user interface with a reasonable complexity
Ideas to try
------------
- [x] build module to translate sysrepo callbacks to Lua config calls
- [x] build command line client which can display and edit declarative config in a text format (probably YAML to make it similar to Knot DNS)

2020 Q3

https://gitlab.nic.cz/knot/knot-resolver/-/issues/538
lower default EDNS buffer size to 1232 (2020-10-26, Petr Špaček)
Default needs to follow https://dnsflagday.net/2020/
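Until the default changes, the value can already be pinned explicitly in the configuration; `net.bufsize()` is the existing knob (a one-line sketch):

```lua
-- advertise a 1232-byte EDNS buffer, per the 2020 DNS flag day recommendation
net.bufsize(1232)
```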
This should prevent #300 from happening.

https://gitlab.nic.cz/knot/knot-resolver/-/issues/540
redirecting a name elsewhere using user-supplied CNAME (Google SafeSearch) (2019-12-20, Mr. Blue Coat)

Sorry if this is obvious but I couldn't find any info on how to enable Google SafeSearch with Knot-Resolver using CNAME: https://www.leowkahman.com/2017/09/11/enforce-safe-search-on-google-youtube-bing/

https://gitlab.nic.cz/knot/knot-resolver/-/issues/541
CI: optimize packaging tests (2020-05-27, Petr Špaček)
Packaging tests merged in !892 do their job, but it is too slow for automated run on every commit.
Ideas for improvement:
- [ ] use py.test framework
- [ ] use own image cache instead of implicit and imperfect Docker build cache
- [ ] explicitly split base-image preparation from test itself
@tkrizek has even more ideas.

https://gitlab.nic.cz/knot/knot-resolver/-/issues/542
[tls_client] session resumption does not work properly (2022-02-18, Vladimír Čunát)
We do receive resumption tickets from upstream...It doesn't break handshake but resumption never happens. Maybe it's broken just on TLS 1.3, or some similar condition. I tried this with quad-{1,8,9} and it looks the same in verbose log.
We do receive resumption tickets from upstream
```
[gnutls] (4) HSK[0x1644310]: NEW SESSION TICKET (4) was received. Length 246[496], frag offset 0, frag length: 246, sequence: 0
```
but never send it on re-connection (no idea why so far)
```
[gnutls] (4) EXT[0x1644310]: Preparing extension (Session Ticket/35) for 'client hello'
[gnutls] (4) EXT[0x1644310]: Sending extension Session Ticket/35 (0 bytes)
```
and thus the session can't resume.
```
[tls_client] TLS session has not resumed
```
_Tested with latest releases: Knot Resolver 4.3.0 and GnuTLS 3.6.11.1._

https://gitlab.nic.cz/knot/knot-resolver/-/issues/543
kres-cache-gc.service has wrong cache path (2020-02-03, Jean-Daniel)

Version: 5.0.0
The distributed kres-cache-gc.service file specifies the wrong path for the cache directory, making it pointless.
The default cache path is `/var/cache/knot-resolver` and it uses `/var/lib/knot-resolver`, resulting in the log being cluttered by `Error: /var/lib/knot-resolver does not exist or is not a LMDB` messages.
For the record, this is the installed file:
```
[Unit]
Description=Knot Resolver Garbage Collector daemon
Documentation=man:kresd.systemd(7)
Documentation=man:kresd(8)
[Service]
Type=simple
ExecStart=/usr/sbin/kres-cache-gc -c /var/lib/knot-resolver -d 1000
User=knot-resolver
Group=knot-resolver
Restart=on-failure
RestartSec=30
StartLimitInterval=400
StartLimitBurst=10
Slice=system-kresd.slice
[Install]
WantedBy=kresd.target
```
```
Env: Ubuntu 18.04 with apt source: https://download.opensuse.org/repositories/home:/CZ-NIC:/knot-resolver-latest/xUbuntu_18.04/ /
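A local workaround until the packaged unit is fixed is a standard systemd drop-in that points the garbage collector at the real cache directory (a sketch; paths taken from the report above):

```
# /etc/systemd/system/kres-cache-gc.service.d/override.conf
# (created e.g. via `systemctl edit kres-cache-gc.service`)
[Service]
# the empty ExecStart= clears the packaged command before replacing it
ExecStart=
ExecStart=/usr/sbin/kres-cache-gc -c /var/cache/knot-resolver -d 1000
```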
https://gitlab.nic.cz/knot/knot-resolver/-/issues/544
Debian Repo (2020-02-05, Magnus Frühling)

Hi there,
is there any plan for a Debian repo again?
similar to `deb https://deb.knot-dns.cz/knot-resolver/ stretch main` back then...

https://gitlab.nic.cz/knot/knot-resolver/-/issues/545
LUA error blocks http server (2020-02-28, analogic)

```
[worker.background] error: /usr/share/lua/5.1/http/h2_stream.lua:88: invalid state progression ('closed' to 'closed') stack traceback:
[C]: in function 'error'
/usr/share/lua/5.1/http/h2_stream.lua:88: in function 'set_state'
/usr/share/lua/5.1/http/h2_stream.lua:565: in function 'handler'
/usr/share/lua/5.1/http/h2_connection.lua:204: in function 'handle_frame'
/usr/share/lua/5.1/http/h2_connection.lua:243: in function 'step'
/usr/share/lua/5.1/http/h2_connection.lua:352: in function 'get_next_incoming_stream'
/usr/share/lua/5.1/http/server.lua:147: in function </usr/share/lua/5.1/http/server.lua:127>
[worker.background] error: /usr/share/lua/5.1/http/h2_stream.lua:88: invalid state progression ('closed' to 'closed') stack traceback:
[C]: in function 'error'
/usr/share/lua/5.1/http/h2_stream.lua:88: in function 'set_state'
/usr/share/lua/5.1/http/h2_stream.lua:565: in function 'handler'
/usr/share/lua/5.1/http/h2_connection.lua:204: in function 'handle_frame'
/usr/share/lua/5.1/http/h2_connection.lua:243: in function 'step'
/usr/share/lua/5.1/http/h2_connection.lua:352: in function 'get_next_incoming_stream'
/usr/share/lua/5.1/http/server.lua:147: in function </usr/share/lua/5.1/http/server.lua:127>
```
I am trying to set up Let's Encrypt + clientCA with traefik in front of the webmgmt of knot resolver. It works for the first two browser sessions and then these errors appear and the web server gets blocked...

https://gitlab.nic.cz/knot/knot-resolver/-/issues/546
[webmgmt] Use javascript secure scheme detection instead of server detection (2020-02-10, analogic)

Please see:
https://gitlab.labs.nic.cz/knot/knot-resolver/blob/master/modules/http/static/kresd.js#L335
This won't work with the latest browsers when we are using a reverse proxy (with auth) like this:
`(client) -> https://company.com/mgmt (reverse-proxy) -> http://resolver:8453`
the result is: `ws://...` which gets blocked because it is not safe to use on https
Better would be something like this, which gets evaluated entirely by the browser:
```
var wsStats = ('https:' == document.location.protocol ? 'wss://' : 'ws://') + location.host + '/stats';
```

https://gitlab.nic.cz/knot/knot-resolver/-/issues/547
SERVFAIL when VPN active (2020-03-02, Leonardo Brondani Schenkel)

Knot Resolver, version 3.2.1, shipped with TurrisOS 4.0.5.
When I'm not using any VPN, all domains resolve. When I enable my VPN, and that substitutes my default gateway, then domains such as `bit.ly` and `storage.googleapis.com` no longer resolve and `kresd` returns `SERVFAIL`. If I disable the VPN those domains immediately start resolving again.
I see no evidence of tampering from the VPN side, since querying via `dig @1.1.1.1` and `dig @8.8.8.8` works. And if I enable TLS forward to CloudFlare, the same behaviour persists (but only when the VPN is active).
This was not the case some time ago. I haven't changed the router configuration, and this behaviour started happening recently. I presume some recent update triggered it.
I didn't see any specific errors in the logs that could shed any light into this behaviour. I am a developer myself and fairly technical. Please let me know any particular configuration files or logs you want me to include, or any troubleshooting steps I can take. I'm a bit lost as to why this is happening and don't know how to diagnose it.
This seems to be a very similar report: https://forum.turris.cz/t/openvpn-dns-not-working-when-connected-to-protonvpn/11365

https://gitlab.nic.cz/knot/knot-resolver/-/issues/549
lib/knot-resolver/sandbox.lua:399: can't open cache path '.'; (2020-03-06, Yarema)

with version 5.0.1 on FreeBSD 12.1 I get the following when starting `kresd`
```
kresd -n -c /usr/local/etc/knot-resolver/kresd.conf /var/db/kresd
[system] error while loading config: /usr/local/lib/knot-resolver/sandbox.lua:399: can't open cache path '.'; working directory '/var/db/kresd'; Invalid argument (workdir '/var/db/kresd')
```
the LMDB files are created as specified with `cache.size` but then kresd gives up with the above error.

https://gitlab.nic.cz/knot/knot-resolver/-/issues/550
LMDB error: MDB_BAD_RSLOT (2020-03-06, Jiří Helebrant <jiri.helebrant@nic.cz>)

I somehow managed to crash kresd while testing [adam/dns-crawler](https://gitlab.labs.nic.cz/adam/dns-crawler) (ie. a lot of queries).
end of the log:
```
[cache] MDB_BAD_TXN, probably overfull
[cache] clearing because overfull, ret = -28
[cache] LMDB error: MDB_BAD_RSLOT: Invalid reuse of reader locktable slot
[cache] LMDB error: MDB_BAD_RSLOT: Invalid reuse of reader locktable slot
```
- [full kresd.log](/uploads/baa8516ff8198277ad9fc5255a24bbef/kresd.log) (with bogus_log, so probably `grep -v DNSSEC`…)
- [kresd.conf](/uploads/a9b8a558208e2b6847a88cb986dbe471/kresd.conf)
- `MDB_BAD_RSLOT` seems to come from [mdb.c#L2693](https://github.com/LMDB/lmdb/blob/LMDB_0.9.24/libraries/liblmdb/mdb.c#L2693)
- knot-resolver 5.0.1, lmdb 0.9.24, knot/libknot 2.9.2, gentoo

https://gitlab.nic.cz/knot/knot-resolver/-/issues/552
Segmentation fault in stats.c (2020-03-25, MartB)

Hey there, just upgraded to fedora 32 and kresd started crashing in the following line:
https://gitlab.labs.nic.cz/knot/knot-resolver/-/blob/v5.0.1/modules/stats/stats.c#L496
```
Program received signal SIGSEGV, Segmentation fault.
0x00007ffff6ba20fe in stats_init (module=0x5555555d6170) at ../modules/stats/stats.c:496
496 sa->sa_family = AF_UNSPEC;
(gdb) l
491 if (array_reserve(data->upstreams.q, UPSTREAMS_COUNT) != 0) {
492 return kr_error(ENOMEM);
493 }
494 for (size_t i = 0; i < UPSTREAMS_COUNT; ++i) {
495 struct sockaddr *sa = (struct sockaddr *)&data->upstreams.q.at[i];
496 sa->sa_family = AF_UNSPEC;
497 }
498 return kr_ok();
499 }
500
(gdb) print sa
$2 = (struct sockaddr *) 0x0
(gdb) print data->upstreams
$3 = {q = {at = 0x5555555efba0, len = 0, cap = 1024}, head = 0}
```
Why does sa evaluate to 0x0? Did something change with regard to memory allocation for the ring-buffer used?
I can't quite figure out what's going on here, any help is appreciated.

https://gitlab.nic.cz/knot/knot-resolver/-/issues/553
Bugs on Daf method del (2020-04-02, realPy)

There was a mistake in the del method of the daf modules.
The id of the policy rule is not the key for deleting the entry in the daf.rules array
See the attached patch
[patch](/uploads/c2a7e5bda6e23e61b747ce57fd037223/patch)

https://gitlab.nic.cz/knot/knot-resolver/-/issues/554
Lua command map() does not work with multiple instances started using systemd (2020-10-27, Petr Špaček)

This affects all instances which do not use the `-f` option (which is deprecated anyway).
We need to rewrite the `map()` command to use control sockets (instead of pipes inherited from the parent process) or replace it with something completely different.

https://gitlab.nic.cz/knot/knot-resolver/-/issues/556
policy: filters that use query don't work with postrules (2020-04-03, Tomas Krizek)

`policy.suffix` or `policy.pattern` filters don't work when policy is evaluated as a postrule, because in the finish phase, `req:current()` is nil.
One use case that doesn't work is `qname` filter with `reroute`/`rewrite` action in the `daf` module:
```
daf.add('qname = example.com reroute 192.0.2.0/24-127.0.0.0')
```

https://gitlab.nic.cz/knot/knot-resolver/-/issues/557
Resolver retransmits too early (2021-01-04, Štěpán Balážik)

Steps to reproduce on an isolated network:
1. Set up an authoritative server on 1.0.0.100 with simulated static (100 ms) latency to the resolver with this zone
```
. 86400 IN SOA j.root-servers.net. nstld.verisign-grs.com. 2019072500 1800 900 604800 86400
*. 3600 IN A 1.1.1.1
. 86400 IN NS j.root-servers.net.
j.root-servers.net. 86400 IN A 1.0.0.100
```
2. Start resolver on 1.0.0.1 with these root hints capturing the traffic to PCAP
```
. 3600000 NS J.ROOT-SERVERS.NET.
J.ROOT-SERVERS.NET. 3600000 A 1.0.0.100
```
3. Query the resolver with questions to the root zone triggering cache misses (with `for i in $(seq 0 9999); do echo "$i A"; done | dnsperf -q 100 -a 2.0.0.1 -s 1.0.0.1`). Capture the traffic to a PCAP file.
4. Observe the PCAP for `Destination unreachable (Port unreachable)` packets which are results of the early retransmits (as seen in the screenshot from Wireshark below).
![image](/uploads/65e2c62d0b22abdb5f4c89e2c806e37b/image.png)
These retransmits happen after 110 to 150 ms after the original transmit. This effect is more pronounced if the number of outstanding queries is higher (the `-q` argument to `dnsperf`). About 1 % of queries from this test result in retransmit.
Verbose log of the query from the screenshot shows the two queries as well
```
[00000.00][plan] plan '7251.' type 'A' uid [07251.00]
[07251.00][iter] '7251.' type 'A' new uid was assigned .01, parent uid .00
[07251.01][cach] => no NSEC* cached for zone: .
[07251.01][cach] => skipping zone: ., NSEC, hash 0;new TTL -123456789, ret -2
[07251.01][resl] => going insecure because there's no covering TA
[07251.01][zcut] found cut: . (rank 020 return codes: DS -2, DNSKEY -2)
[07251.01][resl] => id: '24592' querying: '1.0.0.100#00053' score: 100 zone cut: '.' qname: '7251.' qtype: 'A' proto: 'udp'
[07251.01][resl] => id: '24592' querying: '1.0.0.100#00053' score: 100 zone cut: '.' qname: '7251.' qtype: 'A' proto: 'udp'
[07251.01][iter] <= rcode: NOERROR
[07251.01][cach] => stashed 7251. A, rank 020, 20 B total, incl. 0 RRSIGs
[07251.01][resl] <= server: '1.0.0.100' rtt: 148 ms
[07251.01][resl] AD: request NOT classified as SECURE
[07251.01][resl] finished: 4, queries: 1, mempool: 16400 B
```
Now that I think of it, this is probably caused by the packet being stuck in a buffer somewhere, but I think the resolver should cope with this in better ways than to generate more traffic. A 110 ms timeout seems too low for a server with 100 ms latency. This is therefore related to #447.

https://gitlab.nic.cz/knot/knot-resolver/-/issues/558
Redundant parallel queries for nonexistent AAAA records generated when querying for names from one zone repeatedly (2021-01-04, Štěpán Balážik)

Steps to reproduce on an isolated network:
1. Set up an authoritative server on 1.0.0.100 with simulated static (100 ms) latency to the resolver with this zone
```
. 86400 IN SOA j.root-servers.net. nstld.verisign-grs.com. 2019072500 1800 900 604800 86400
*. 3600 IN A 1.1.1.1
. 86400 IN NS j.root-servers.net.
j.root-servers.net. 86400 IN A 1.0.0.100
```
2. Start resolver on 1.0.0.1 with these root hints capturing the traffic to PCAP
```
. 3600000 NS J.ROOT-SERVERS.NET.
J.ROOT-SERVERS.NET. 3600000 A 1.0.0.100
```
3. Query the resolver with questions to the root zone triggering cache misses (with `for i in $(seq 0 $(( 2 * N ))); do echo "$i A"; done | dnsperf -q $N -a 2.0.0.1 -s 1.0.0.1`). Capture the traffic to a PCAP file.
Note that `-q` option in `dnsperf` sets the maximum number of outstanding queries.
Now observe the PCAP. The first N answers from the resolver look OK; the next N, however, are more than 4 times slower than the rest.
This is an example for N=20:
![image](/uploads/b0f7ac9519b792f7ccb0e431a01f5caa/image.png)
Verbose log for all of these looks like this:
```
[00000.00][plan] plan '20.' type 'A' uid [00020.00]
[00020.00][iter] '20.' type 'A' new uid was assigned .01, parent uid .00
[00020.01][cach] => no NSEC* cached for zone: .
[00020.01][cach] => skipping zone: ., NSEC, hash 0;new TTL -123456789, ret -2
[00020.01][resl] => going insecure because there's no covering TA
[00020.01][zcut] found cut: . (rank 020 return codes: DS -2, DNSKEY -2)
[00020.01][plan] plan 'j.root-servers.net.' type 'AAAA' uid [00020.02]
[00020.02][iter] 'j.root-servers.net.' type 'AAAA' new uid was assigned .03, parent uid .01
[00020.03][cach] => no NSEC* cached for zone: .
[00020.03][cach] => skipping zone: ., NSEC, hash 0;new TTL -123456789, ret -2
[00020.03][iter] <= rcode: NOERROR
[00020.03][iter] <= retrying with non-minimized name
[00020.03][cach] => not overwriting NS net.
[00020.03][resl] <= server: '1.0.0.100' rtt: 64 ms
[00020.03][iter] 'j.root-servers.net.' type 'AAAA' new uid was assigned .04, parent uid .01
[00020.04][iter] <= rcode: NOERROR
[00020.04][cach] => not overwriting AAAA j.root-servers.net.
[00020.04][resl] <= server: '1.0.0.100' rtt: 101 ms
[00020.01][iter] '20.' type 'A' new uid was assigned .05, parent uid .00
[00020.05][plan] plan 'j.root-servers.net.' type 'A' uid [00020.06]
[00020.06][iter] 'j.root-servers.net.' type 'A' new uid was assigned .07, parent uid .05
[00020.07][cach] => skipping unfit NS packet: rank 020, new TTL 86400
[00020.07][cach] => no NSEC* cached for zone: .
[00020.07][cach] => skipping zone: ., NSEC, hash 0;new TTL -123456789, ret -2
[00020.07][iter] <= rcode: NOERROR
[00020.07][iter] <= retrying with non-minimized name
[00020.07][cach] => not overwriting NS net.
[00020.07][resl] <= server: '1.0.0.100' rtt: 100 ms
[00020.07][iter] 'j.root-servers.net.' type 'A' new uid was assigned .08, parent uid .05
[00020.08][iter] <= rcode: NOERROR
[00020.08][cach] => not overwriting A j.root-servers.net.
[00020.08][resl] <= server: '1.0.0.100' rtt: 100 ms
[00020.05][iter] '20.' type 'A' new uid was assigned .09, parent uid .00
[00020.09][resl] => id: '42223' querying: '1.0.0.100#00053' score: 100 zone cut: '.' qname: '20.' qtype: 'A' proto: 'udp'
[00020.09][iter] <= rcode: NOERROR
[00020.09][cach] => stashed 20. A, rank 020, 20 B total, incl. 0 RRSIGs
[00020.09][resl] <= server: '1.0.0.100' rtt: 103 ms
[00020.09][resl] AD: request NOT classified as SECURE
[00020.09][resl] finished: 4, queries: 3, mempool: 16400 B
```
Two `AAAA` and one `A` resolver queries are generated unnecessarily for each of these client queries.

https://gitlab.nic.cz/knot/knot-resolver/-/issues/559
handle conflicting trust anchor & negative trust anchor definitions (2020-05-07T08:36:57+02:00, Vladimír Čunát <vladimir.cunat@nic.cz>)

People could reasonably expect that adding a root negative trust anchor would disable validation (everywhere):
```lua
trust_anchors.set_insecure({'.'})
```
but that is not so, at least if built with `-Dkeyfile_default=foo` (usual in distros; maybe in some other configs as well).
Our documented way to _completely_ disable validation seems to work
```lua
trust_anchors.remove('.')
```
and we certainly discourage such things, so I don't expect this to be an important issue. In particular, using NTAs below root seems to work fine. _I suspect the issue is having both a TA and an NTA on the same name._

https://gitlab.nic.cz/knot/knot-resolver/-/issues/562
error: /usr/lib/knot-resolver/kres_modules/prefill.lua:32: attempt to index field 'bg_worker' (a nil value) (2020-04-14T09:23:55+02:00, Gaspard d'Hautefeuille)

Hi,
I have this small error when I use the prefill module in the latest version.
Thanks,
HLFH

https://gitlab.nic.cz/knot/knot-resolver/-/issues/563
cache problems with DRBD backed systems (2020-04-16T09:05:12+02:00, Matt Taggart)

We have a VM that is running Knot Resolver (5.0.1); that VM is managed by [Ganeti](http://www.ganeti.org/) and runs on top of [DRBD](https://en.wikipedia.org/wiki/Distributed_Replicated_Block_Device) with LVM LVOLs hosted on SSDs below that. DRBD mirrors the disk across servers so Ganeti can live-migrate VMs from server to server.
This VM is a moderately loaded webserver, so it doing quite a few lookups, but overall traffic on the public interface is under 0.4Mbit/sec. But when we look at the network bandwidth associated with the DRBD device we are seeing 400Mbit/sec (50MBytes/sec). As an experiment we put the cache in a tmpfs as suggested in the [docs](https://knot-resolver.readthedocs.io/en/stable/daemon-bindings-cache.html#persistence) and that completely eliminated the traffic.
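For reference, the tmpfs setup from the docs boils down to something like this (the mount point and sizes here are illustrative, not our exact values):

```lua
-- Sketch, assuming /var/cache/knot-resolver is already a tmpfs mount, e.g. via fstab:
--   tmpfs /var/cache/knot-resolver tmpfs rw,size=1G,nosuid,nodev 0 0
-- kresd then opens its LMDB cache on that RAM-backed path as usual:
cache.open(100 * MB, 'lmdb:///var/cache/knot-resolver')
```

With the cache in RAM, none of its write traffic ever reaches the DRBD-backed block device.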
So something about the way the disk caching works results in i/o patterns that don't work with DRBD. A couple of things I thought of:
* DRBD only sends writes across the wire, reads can be resolved locally. So this is some sort of write i/o
* This is not just lookup associated i/o being written to cache, as I mentioned the traffic into the system is much lower. So there is some sort of amplification going on here. Maybe large sections of cache are getting flushed every time there is a small update?
* How often is data in RAM being flushed to the disk cache? Does it flush for every query or is it timed or when hitting some size threshold? Say it flushes every 10 seconds, does that mean on server reboot that the cache that gets loaded would only be missing the last 10 seconds of queries?
* We have other VMs that are running knot resolver and are not seeing this issue, but they don't do as much web traffic. We suspect they are also causing more i/o, but we don't have good numbers for this yet.
I think a good test would be setting up a DRBD between two hosts and putting the cache on that (and that would help rule out qemu/ganeti/LVM) and then hitting it with a bunch of lookups.
Let me know if you have questions or ideas of things to try, or need more details to reproduce. Thanks.

https://gitlab.nic.cz/knot/knot-resolver/-/issues/565
consider libjudy for a cache backend (2020-04-15T12:19:07+02:00, Matt Taggart)

In reading more about the LMDB cache backend, I remembered something I read about a while back: libjudy. Here's an introduction:
http://judy.sourceforge.net/doc/10minutes.htm
I have no idea if it's appropriate for your use case, but thought it was worth letting you know.
Available in Debian/Fedora/CentOS/etc.
Thanks
Edit: This Wikipedia [article](https://en.wikipedia.org/wiki/Judy_array) has some more info/critique.

https://gitlab.nic.cz/knot/knot-resolver/-/issues/566
knot-resolver-release doesn't setup proper repos for Ubuntu derivates (2020-04-25T15:40:23+02:00, Talisker69)

Hi folks,
I'm running Linux Mint 19.2, and I'm trying to install knot-resolver, but each time (on 2 boxes) I can't launch the kresd@1.service service. I followed the documentation, but ....
![Screenshot_from_2020-04-15_21-32-27](/uploads/a175fc08867d89a37698de816c9b104d/Screenshot_from_2020-04-15_21-32-27.png)
Is it necessary to remove (or disable) systemd-resolved before installing knot-resolver?
Thanks in advance for your help
Regards
JC

Assignee: Tomas Krizek

https://gitlab.nic.cz/knot/knot-resolver/-/issues/567
function policy.mepattern(action, badword) (2020-11-24T16:10:14+01:00, Ghost User)

Hello, this is not a problem but rather a wish to expand the functionality; maybe it will come in handy for someone. Because Lua does not fully support regexps, only patterns, I added the following to my program settings:
```lua
function policy.mepattern(action, badword)
    return function(_, query)
        for i = 1, #badword do
            -- plain-text search is enough: string.find already matches anywhere
            -- in the name, so wrapping the word in ".*" is redundant
            if string.find(query:name(), badword[i], 1, true) then
                return action
            end
        end
        return nil
    end
end
policy.add(policy.mepattern(ansmeip, { "pussy", "sex", "adult", "porno" })) -- etc.
```

https://gitlab.nic.cz/knot/knot-resolver/-/issues/570
impl.c: error: implicit declaration of function 'luaL_traceback' (2020-05-08T17:29:05+02:00, Josef Schlehofer)

When I tried to compile the latest Knot Resolver 5.1.0 for the Turris OS 3.x series, which uses gcc 4.8.4, I got this error, which is treated as an error due to gcc's option `-Werror=implicit-function-declaration`.
```
../daemon/bindings/impl.c
../daemon/bindings/impl.c: In function 'lua_error_p':
../daemon/bindings/impl.c:52:2: error: implicit declaration of function 'luaL_traceback' [-Werror=implicit-function-declaration]
luaL_traceback(L, L, "error occured here (config filename:lineno is at the bottom, if config is involved):", 0);
^
cc1: some warnings being treated as errors
```
I'm not sure if it is possible to change the gcc option in the Turris OS 3.x series, as it is in maintenance mode.

https://gitlab.nic.cz/knot/knot-resolver/-/issues/571
broken links in docs (2020-05-18T10:16:35+02:00, Petr Špaček)

Version 5.1.0 has a bunch of broken links in the documentation.
List of broken links from urichecker:
[broken_links.html](/uploads/c54dcf6f2955904df3fac322bc314257/broken_lins.html)

https://gitlab.nic.cz/knot/knot-resolver/-/issues/572
Docker images for ARM & ARM64 (2020-05-28T04:11:05+02:00, Sebastiaan Deckers)

Any chance of publishing ARM & ARM64 Docker builds?
I currently use the Ubuntu packages on ARM (Odroid and Raspberry Pi). Would prefer using the Docker images for convenience and to compose more easily with other services.
Thanks!

https://gitlab.nic.cz/knot/knot-resolver/-/issues/574
kresd occasionally fails to resolve domain names until restarted (2020-05-25T15:36:05+02:00, Ghost User)

On my Turris Omnia, since at least a few weeks or months ago (sorry, I haven't kept track of when the problem first appeared), kresd fails to resolve some domain names (with SERVFAIL) until it gets restarted or its cache is cleared.
I have not kept track of all the affected domains (and there are probably some I didn't notice), but one that is often affected is one I control (sitedethib.com). I suspect that all affected domains use DNSSEC, but I am not sure of it.
The fact that it occurs more often for my domain than for others may indicate misconfiguration on my side, but the failures are somewhat recent while I haven't changed the nameserver's setup for years.
I don't have any more logs or information, but if you can think of a way to get more information next time it happens, I will be happy to oblige.

https://gitlab.nic.cz/knot/knot-resolver/-/issues/575
Suspecting the issue with 5.1.0 version related to unfit CNAME RR (2020-08-17T10:23:34+02:00, dlavrichev)

Hi guys, last night I installed the new version of Knot Resolver (5.1.0) and I see that one of our critical domains is no longer resolved properly when I set a policy to forward all requests to a specified DNS server:
```
-- forward all traffic to specified Azure IP addresses (selected automatically)
policy.add(policy.all(policy.FORWARD({'168.63.129.16'})))
```
When I run trace I get the following message:
```
[65570.01][iter] <= rcode: NOERROR
[65570.01][iter] <= cname chain, following
[65570.02][iter] '1803xxx.group21.sites.hubspot.net.' type 'A' new uid was assigned .03, parent uid .00
[65570.03][cach] => skipping unfit CNAME RR: rank 060, new TTL -908
```
If I run dig commands against the resolver, I have to explicitly ask for the CNAME type; only then does it work:
```
dig www.mydomain.com @10.64.33.8            # no result
dig -t cname www.mydomain.com @10.64.33.8   # works fine
```
I get this result:
```
;; ANSWER SECTION:
www.mydomain.com. 21 IN CNAME 1803xxx.group21.sites.hubspot.net.
```
This behavior is different from Knot Resolver 5.0.1 - that version worked fine.
I tried to google this and found this response located at: https://lists.nic.cz/pipermail/knot-resolver-users/2019/000179.html
"
Hello.
You seem to be hitting quite a common stumbling point - you see:
On 5/23/19 3:10 AM, B. Cook wrote:
> [cach] => NSEC sname: covered by: pccw. -> pe., new TTL 83379
i.e. kresd has a record from the root zone proving that there's no .pcsd
TLD, so it answers NXDOMAIN. Usually it doesn't get such a record at
start, and it might obtain a positive record that takes precedence, too,
so it's easy to get confused... combining contradictory records is just
tricky. I expect you can configure these particular TLD names - see
docs for details:
https://knot-resolver.readthedocs.io/en/latest/modules.html#replacing-part-of-the-dns-tree
--Vladimir
"
Unfortunately, the provided link https://knot-resolver.readthedocs.io/en/latest/modules.html#replacing-part-of-the-dns-tree is broken.

https://gitlab.nic.cz/knot/knot-resolver/-/issues/576
"telemachus/tapered" is not available (2020-05-21T22:08:06+02:00, Héctor Molinero Fernández)

I don't know when this happened, but it seems that [one of the submodules](https://gitlab.labs.nic.cz/knot/knot-resolver/-/blob/v5.1.1/.gitmodules#L7-9) that this repository uses is no longer available.
```
Cloning into '/tmp/knot-resolver/tests/config/tapered'...
fatal: could not read Username for 'https://github.com': No such device or address
fatal: clone of 'https://github.com/telemachus/tapered.git' into submodule path '/tmp/knot-resolver/tests/config/tapered' failed
Failed to clone 'tests/config/tapered' a second time, aborting
```

https://gitlab.nic.cz/knot/knot-resolver/-/issues/577
prefill: Address family not supported by protocol (2020-08-17T10:24:32+02:00, Sebastiaan Deckers)

I see this error in the logs. What could be causing it? (The machine has both IPv4 and IPv6 Internet access.)
### Error
```
kresd [prefill] cannot download new zone (/usr/lib/knot-resolver/kres_modules/prefill.lua:85: [prefill] fetch of `https://www.internic.net/domain/root.zone` failed: HTTP client library error: Address family not supported by protocol (97)), will retry root zone download in 09 minutes 55 seconds
```
### Platform
- Ubuntu 18.04.4 LTS (GNU/Linux 5.4.10-x86_64-linode132 x86_64)
### Status
```
$ systemctl status kresd@1.service
● kresd@1.service - Knot Resolver daemon
Loaded: loaded (/lib/systemd/system/kresd@.service; indirect; vendor preset: enabled)
Active: active (running) since Thu 2020-05-14 02:42:01 UTC; 1 weeks 4 days ago
Docs: man:kresd.systemd(7)
man:kresd(8)
Main PID: 944 (kresd)
Tasks: 1 (limit: 1149)
CGroup: /system.slice/system-kresd.slice/kresd@1.service
└─944 /usr/sbin/kresd -c /usr/lib/knot-resolver/distro-preconfig.lua -c /etc/knot-resolver/kresd.conf -n
```
### Package
- Version: `5.1.0`
- Repo: `http://download.opensuse.org/repositories/home:/CZ-NIC:/knot-resolver-latest/xUbuntu_18.04/ /`
- Name: `home:CZ-NIC:knot-resolver-latest`
### Config
```
-- Cache
-- Path: /var/cache/knot-resolver
cache.size = 100*MB
-- Network
net.listen('127.0.0.1', 53, { kind = 'dns' })
net.listen('::1', 53, { kind = 'dns', freebind = true })
-- DoT for CA connections
modules.load('experimental_dot_auth')
-- Cache roots
modules.load('prefill')
prefill.config({
['.'] = {
url = 'https://www.internic.net/domain/root.zone',
ca_file = '/etc/ssl/certs/ca-certificates.crt', -- Ubuntu specific
interval = 86400 -- seconds
}
})
-- Keep frequent domains fresh
modules.load('predict')
predict.config({
window = 15, -- 15 minutes sampling window
period = 6*(60/15) -- track last 6 hours
})
})
```

https://gitlab.nic.cz/knot/knot-resolver/-/issues/579
SERVFAIL due DNSSEC validation for various domains (2020-06-02T10:25:41+02:00, Martin)

Hello,
I'm experiencing SERVFAIL answers from the resolver for various domains. After the first request, the resolver keeps returning SERVFAIL for several minutes (cached), and then it starts working normally.
The resolver is configured to forward queries to 1.1.1.1 with TLS.
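The forwarding setup described is presumably along these lines (a sketch; the hostname pin is an illustrative assumption, not taken from this report):

```lua
-- Sketch: forward all queries to 1.1.1.1 over TLS, authenticating the
-- server certificate by hostname.
policy.add(policy.all(policy.TLS_FORWARD({
    {'1.1.1.1', hostname='cloudflare-dns.com'}
})))
```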
```
dig @127.0.0.1 www.pjatak.cz
; <<>> DiG 9.11.14-RedHat-9.11.14-2.fc31 <<>> @127.0.0.1 www.pjatak.cz
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 54116
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;www.pjatak.cz. IN A
;; Query time: 390 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Ne máj 31 15:18:59 CEST 2020
;; MSG SIZE rcvd: 42
```
resolver logs:
```
máj 31 15:18:42 localhost.localdomain kresd[1569]: [00000.00][plan] plan 'www.pjatak.cz.' type 'A' uid [02431.00]
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.00][iter] 'www.pjatak.cz.' type 'A' new uid was assigned .01, parent uid .00
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.01][cach] => skipping unfit nsec_p: new TTL -18151343, error -116
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.01][cach] => trying zone: cz., NSEC3, hash 1479a4a7
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.01][cach] => NSEC3 depth 2: hash 1va82mb76jvko8sbtcr95qivtu3fpkrr
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.01][cach] => NSEC3 encloser error for www.pjatak.cz.: range search found stale or insecure entry
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.01][cach] => NSEC3 depth 1: hash g1eiq4lu8266c9lc3bdp3butasbllif5
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.01][cach] => NSEC3 encloser error for pjatak.cz.: range search found stale or insecure entry
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.01][cach] => NSEC3 depth 0: hash 5571d26g1u4qeqgoheriiiorkjq0rlba
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.01][cach] => NSEC3 encloser error for cz.: range search found stale or insecure entry
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.01][cach] => trying zone: cz., NSEC3, hash 3662e2e8
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.01][cach] => NSEC3 depth 2: hash j6te5f35rn1stuav53lv9g1ptjehh9ub
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.01][cach] => NSEC3 encloser error for www.pjatak.cz.: range search found stale or insecure entry
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.01][cach] => NSEC3 depth 1: hash 2p69cpiadt9t2qnk2e5rd2s3rlpl51h0
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.01][cach] => NSEC3 encloser error for pjatak.cz.: range search found stale or insecure entry
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.01][cach] => NSEC3 depth 0: hash slugdha9hu87ndl6j49km4e99n33b518
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.01][cach] => NSEC3 encloser error for cz.: range search found stale or insecure entry
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.01][plan] plan '.' type 'DNSKEY' uid [02431.02]
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.02][iter] '.' type 'DNSKEY' new uid was assigned .03, parent uid .01
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.03][cach] => satisfied by exact RRset: rank 060, new TTL 3707
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.03][iter] <= rcode: NOERROR
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.03][vldr] <= parent: updating DNSKEY
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.03][vldr] <= answer valid, OK
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.01][iter] 'www.pjatak.cz.' type 'A' new uid was assigned .04, parent uid .00
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.04][plan] plan 'cz.' type 'DS' uid [02431.05]
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.05][iter] 'cz.' type 'DS' new uid was assigned .06, parent uid .04
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.06][cach] => satisfied by exact RRset: rank 060, new TTL 71684
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.06][iter] <= rcode: NOERROR
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.06][vldr] <= DS: OK
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.06][vldr] <= parent: updating DS
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.06][vldr] <= answer valid, OK
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.04][iter] 'www.pjatak.cz.' type 'A' new uid was assigned .07, parent uid .00
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.07][plan] plan 'cz.' type 'DNSKEY' uid [02431.08]
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.08][iter] 'cz.' type 'DNSKEY' new uid was assigned .09, parent uid .07
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.09][cach] => satisfied by exact RRset: rank 060, new TTL 8225
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.09][iter] <= rcode: NOERROR
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.09][vldr] <= parent: updating DNSKEY
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.09][vldr] <= answer valid, OK
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.07][iter] 'www.pjatak.cz.' type 'A' new uid was assigned .10, parent uid .00
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.10][plan] plan 'pjatak.cz.' type 'DS' uid [02431.11]
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.11][iter] 'pjatak.cz.' type 'DS' new uid was assigned .12, parent uid .10
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.12][cach] => skipping exact RR: rank 060 (min. 030), new TTL -18666143
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.12][cach] => trying zone: cz., NSEC3, hash 1479a4a7
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.12][cach] => NSEC3 depth 1: hash g1eiq4lu8266c9lc3bdp3butasbllif5
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.12][cach] => NSEC3 encloser error for pjatak.cz.: range search found stale or insecure entry
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.12][cach] => NSEC3 depth 0: hash 5571d26g1u4qeqgoheriiiorkjq0rlba
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.12][cach] => NSEC3 encloser error for cz.: range search found stale or insecure entry
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.12][cach] => trying zone: cz., NSEC3, hash 3662e2e8
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.12][cach] => NSEC3 depth 1: hash 2p69cpiadt9t2qnk2e5rd2s3rlpl51h0
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.12][cach] => NSEC3 encloser error for pjatak.cz.: range search found stale or insecure entry
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.12][cach] => NSEC3 depth 0: hash slugdha9hu87ndl6j49km4e99n33b518
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.12][cach] => NSEC3 encloser error for cz.: range search found stale or insecure entry
máj 31 15:18:42 localhost.localdomain kresd[1569]: [ ][nsre] score 21 for 1.1.1.1#00853; cached RTT: 12
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.12][resl] => id: '48127' querying: '1.1.1.1#00853' score: 21 zone cut: 'cz.' qname: 'pjAtAk.cZ.' qtype: 'DS' proto: 'tcp'
máj 31 15:18:42 localhost.localdomain kresd[1569]: [gnutls] (5) REC[0x560140e94700]: Preparing Packet Application Data(23) with length: 40 and min pad: 0
máj 31 15:18:42 localhost.localdomain kresd[1569]: [gnutls] (5) REC[0x560140e94700]: Sent Packet[22] Application Data(23) in epoch 2 and length: 62
máj 31 15:18:42 localhost.localdomain kresd[1569]: [gnutls] (5) REC[0x560140e94700]: SSL 3.3 Application Data packet received. Epoch 2, length: 487
máj 31 15:18:42 localhost.localdomain kresd[1569]: [gnutls] (5) REC[0x560140e94700]: Expected Packet Application Data(23)
máj 31 15:18:42 localhost.localdomain kresd[1569]: [gnutls] (5) REC[0x560140e94700]: Received Packet Application Data(23) with length: 487
máj 31 15:18:42 localhost.localdomain kresd[1569]: [gnutls] (5) REC[0x560140e94700]: Decrypted Packet[22] Application Data(23) with length: 470
máj 31 15:18:42 localhost.localdomain kresd[1569]: [gnutls] (3) ASSERT: buffers.c[_gnutls_io_read_buffered]:589
máj 31 15:18:42 localhost.localdomain kresd[1569]: [gnutls] (3) ASSERT: record.c[_gnutls_recv_int]:1775
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.12][iter] <= rcode: NOERROR
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.12][vldr] <= DS: OK
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.12][vldr] <= parent: updating DS
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.12][vldr] <= answer valid, OK
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.12][cach] => stashed pjatak.cz. DS, rank 060, 178 B total, incl. 1 RRSIGs
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.12][resl] <= server: '1.1.1.1' rtt: 14 ms
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.10][iter] 'www.pjatak.cz.' type 'A' new uid was assigned .13, parent uid .00
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.13][plan] plan 'pjatak.cz.' type 'DNSKEY' uid [02431.14]
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.14][iter] 'pjatak.cz.' type 'DNSKEY' new uid was assigned .15, parent uid .13
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.15][cach] => skipping exact RR: rank 060 (min. 030), new TTL -18667943
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.15][cach] => skipping unfit nsec_p: new TTL -18151343, error -116
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.15][cach] => trying zone: cz., NSEC3, hash 1479a4a7
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.15][cach] => NSEC3 depth 1: hash g1eiq4lu8266c9lc3bdp3butasbllif5
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.15][cach] => NSEC3 encloser error for pjatak.cz.: range search found stale or insecure entry
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.15][cach] => NSEC3 depth 0: hash 5571d26g1u4qeqgoheriiiorkjq0rlba
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.15][cach] => NSEC3 encloser error for cz.: range search found stale or insecure entry
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.15][cach] => trying zone: cz., NSEC3, hash 3662e2e8
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.15][cach] => NSEC3 depth 1: hash 2p69cpiadt9t2qnk2e5rd2s3rlpl51h0
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.15][cach] => NSEC3 encloser error for pjatak.cz.: range search found stale or insecure entry
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.15][cach] => NSEC3 depth 0: hash slugdha9hu87ndl6j49km4e99n33b518
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.15][cach] => NSEC3 encloser error for cz.: range search found stale or insecure entry
máj 31 15:18:42 localhost.localdomain kresd[1569]: [ ][nsre] score 21 for 1.1.1.1#00853; cached RTT: 13
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.15][resl] => id: '06792' querying: '1.1.1.1#00853' score: 21 zone cut: 'pjatak.cz.' qname: 'PJAtak.Cz.' qtype: 'DNSKEY' proto: 'tcp'
máj 31 15:18:42 localhost.localdomain kresd[1569]: [gnutls] (5) REC[0x560140e94700]: Preparing Packet Application Data(23) with length: 40 and min pad: 0
máj 31 15:18:42 localhost.localdomain kresd[1569]: [gnutls] (5) REC[0x560140e94700]: Sent Packet[23] Application Data(23) in epoch 2 and length: 62
máj 31 15:18:42 localhost.localdomain kresd[1569]: [gnutls] (5) REC[0x560140e94700]: SSL 3.3 Application Data packet received. Epoch 2, length: 2359
máj 31 15:18:42 localhost.localdomain kresd[1569]: [gnutls] (5) REC[0x560140e94700]: Expected Packet Application Data(23)
máj 31 15:18:42 localhost.localdomain kresd[1569]: [gnutls] (5) REC[0x560140e94700]: Received Packet Application Data(23) with length: 2359
máj 31 15:18:42 localhost.localdomain kresd[1569]: [gnutls] (5) REC[0x560140e94700]: Decrypted Packet[23] Application Data(23) with length: 2342
máj 31 15:18:42 localhost.localdomain kresd[1569]: [gnutls] (3) ASSERT: buffers.c[_gnutls_io_read_buffered]:589
máj 31 15:18:42 localhost.localdomain kresd[1569]: [gnutls] (3) ASSERT: record.c[_gnutls_recv_int]:1775
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.15][iter] <= rcode: NOERROR
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.15][vldr] <= parent: updating DNSKEY
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.15][vldr] <= answer valid, OK
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.15][cach] => stashed pjatak.cz. DNSKEY, rank 060, 2150 B total, incl. 2 RRSIGs
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.15][resl] <= server: '1.1.1.1' rtt: 125 ms
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.13][iter] 'www.pjatak.cz.' type 'A' new uid was assigned .16, parent uid .00
máj 31 15:18:42 localhost.localdomain kresd[1569]: [ ][nsre] score 21 for 1.1.1.1#00853; cached RTT: 69
máj 31 15:18:42 localhost.localdomain kresd[1569]: [02431.16][resl] => id: '37496' querying: '1.1.1.1#00853' score: 21 zone cut: 'pjatak.cz.' qname: 'WwW.pjatAK.Cz.' qtype: 'A' proto: 'tcp'
máj 31 15:18:42 localhost.localdomain kresd[1569]: [gnutls] (5) REC[0x560140e94700]: Preparing Packet Application Data(23) with length: 44 and min pad: 0
máj 31 15:18:42 localhost.localdomain kresd[1569]: [gnutls] (5) REC[0x560140e94700]: Sent Packet[24] Application Data(23) in epoch 2 and length: 66
máj 31 15:18:43 localhost.localdomain kresd[1569]: [gnutls] (5) REC[0x560140e94700]: SSL 3.3 Application Data packet received. Epoch 2, length: 487
máj 31 15:18:43 localhost.localdomain kresd[1569]: [gnutls] (5) REC[0x560140e94700]: Expected Packet Application Data(23)
máj 31 15:18:43 localhost.localdomain kresd[1569]: [gnutls] (5) REC[0x560140e94700]: Received Packet Application Data(23) with length: 487
máj 31 15:18:43 localhost.localdomain kresd[1569]: [gnutls] (5) REC[0x560140e94700]: Decrypted Packet[24] Application Data(23) with length: 470
máj 31 15:18:43 localhost.localdomain kresd[1569]: [gnutls] (3) ASSERT: buffers.c[_gnutls_io_read_buffered]:589
máj 31 15:18:43 localhost.localdomain kresd[1569]: [gnutls] (3) ASSERT: record.c[_gnutls_recv_int]:1775
máj 31 15:18:43 localhost.localdomain kresd[1569]: [02431.16][iter] <= rcode: SERVFAIL
máj 31 15:18:43 localhost.localdomain kresd[1569]: [02431.16][resl] finished: 8, queries: 5, mempool: 98400 B
```
```
dig @8.8.8.8 www.pjatak.cz
; <<>> DiG 9.11.14-RedHat-9.11.14-2.fc31 <<>> @8.8.8.8 www.pjatak.cz
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 21293
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;www.pjatak.cz. IN A
;; ANSWER SECTION:
www.pjatak.cz. 599 IN CNAME pjatak.cz.
pjatak.cz. 599 IN A 37.205.10.111
;; Query time: 40 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Ne máj 31 15:19:10 CEST 2020
;; MSG SIZE rcvd: 72
```
```
dig +sigchase @8.8.8.8 www.pjatak.cz
;; RRset to chase:
www.pjatak.cz. 599 IN CNAME pjatak.cz.
;; RRSIG of the RRset to chase:
www.pjatak.cz. 599 IN RRSIG CNAME 7 2 600 20201012182307 20191013182307 56890 pjatak.cz. ILO+Gb37b2rzE2zCFg3m0Vn6Yoc16Uw3HV+JbgrAVXGHdu10GZWUYTKB F9JXZfZqh7/8npL1rLe8PqkEAkBWjHun3CRe8wOrxkf4rIBnsVPRGr2e tLhVnJIElBCmEltg+HqHhxW6GfOYZ+sxBCzLz8A5ojBz97Wip78wc8nR FN4j2TrzG1SkvkjXNClXokrhA22R4ocT/S1ymv9Ihchtk6WDbBn3YsAl URJZ07ejqpX2Mof9b0YtgCEKemljY6uWwNS84ttxBciRZN1FswlZ9vk3 nwFowfD1AVV+e9lf3Qvjgs9WE8NM6vGJXj3gh4Fu0npk9cI55y1662iu 8Hf7ozagXIte9RGCBo7Cs6H1hOrquqBcDY0CvWynKze2YVOZnAsA4OD3 AvJ/CD4LPXdPwz/cH+UL1aNEf6cV4lJTEiXogcG/ToJXnG3K+unKew+x l18GzQpTZ7sXdHZxmwYscfq1icb5CM8eZmuTjgkm2ALK2au60BKpfEbW X0fEBzxhYEVgaL7i2aMmooVRs+EtnLY/UzXQ30v/Wpp02deRVZlBpsDe Q7vYLdLoJMlgVtL5bauM/nqEeuYNWuux/xIxlHPmAxp0QwjPGxIiLS/v rAl4KPYmmnko1PRNGko624xkw+l5mg/ogwu++DKONhZcd71fUdfXkV6N Yckt7wuCdzA=
Launch a query to find a RRset of type DNSKEY for zone: pjatak.cz.
;; DNSKEYset that signs the RRset to chase:
pjatak.cz. 1799 IN DNSKEY 256 3 7 BQEAAAAB9QdSKCeptWzBV1tlAFU+AaiSGgU7XaUY6YHgtUQ1ggPvJApl v3N9Xt8vB+z7/FThK4gxzQ3xZ+Y0xZ0sEnQVdl05XTfrrOGjwr6x3Pwx wepWONVZ6FXDb+LhEFx95kYOIjhLnyr5UvOHu9vFOEu1mMENo9gdg00D MBX+tXNBxiHIXzsFUE90QmXBro2GH4EHqTX+4ZVuNCOFhzAnp3h+O7SQ TR8npmdRrmWJvC42uT6ODGEFOnstZ+vJDQnc3ZLvzJvuaXK9pUGJmIQ7 5MEs9xcngf7JXRmW/FqsTph0ZcAXUoad8+Tu43Z0+V1Znf7WtCfODqqj KsKklt7CdY7R6NzEV9b5F//rvG88wZeg+PKXNVbQFFSzyguEsFrvjrTT hdKjyDPfbMRl/vMeoB/dfnB1VP5Ds1zMpqqYqiPPVLBmCjuRC2EalK3t Ph3y9U2xE+A2vytXETew+T+nX9ZG62rS7YnKwsMYrSzUPDTXYgCVwsDM /2Ecl5XEpemOnTvMmQGh7LUuYs/kK2hImPew5ntAQC6jnGr37xC3xtBf kFQN4sV5iSOvZWs5mjP2iEhGEFl5fRqU0Zvck0vOCHBBU0oRj8k4VUpU KSFe2W4iKolj2VS9Jr7S5WIFGFMHUfhyC1j5FVAqSyLBnEKOWper3O+d MQrtWTRl5L1H7v96O/E=
pjatak.cz. 1799 IN DNSKEY 257 3 7 BQEAAAABw0H2Xb7JjIuMMVRD3oqWpoXsriUK4sCT2B0TAc9b6v7K+gEI fhtrQ+LImQ/yY4VLZ1z88RDe48LvV2kA3fjB+4tFJTsgmgxCAg29skRN orVLnb6ztSqZO3FuTYgH3yywEw3W4rTkPfthNhiaMEVXVrFDDU4dGhiJ mvIa9mkaPOkIKeRV4gJqs2YSEIhCKeMxkNNGLn1CIXAiFjVbVDcYFv0n 1bBY2iDUllDIRZapMfoSwJMnHI6VXz3CGjxIfcFcr+BUfVFhobqyV848 n4HJcHKMgErtC8xFmRD++Pq/isLbNs48zDSZQY5jJvD30anwzZnzhWJJ 2ZlirUm6pIazB3a6A7V3c381TsRAyY8suy5pkEriSVs4wSfHkiiwd3Z1 sHCTHgefwyRrArFycXR4bvz9sSFOCjbZfJ4S2RFchQa2D+IJsea+kXa+ LGOi2enMd6Jaq5+WB6dUkgWz+9a0/xqCC2ShywyWeazuoLaaejL8NUDf sGj4TEHfkXX+/BodFl6SicWsQEZuNU44/+pyyFqgDKsHu9t8mDtz/IGR Z/Duj9GKTQ4j953Czkic0thvFwqqd6Xm+C48K1qIB1vWqV4AinXDVf/q jbkPxGP01P+riUs5E0zTEoJOtyTtm/xoV5lTwe2PvhysrtGmcTdyqZXD Z6DQnUgkO7BUjlprbnk=
;; RRSIG of the DNSKEYset that signs the RRset to chase:
pjatak.cz. 1799 IN RRSIG DNSKEY 7 2 1800 20201012182307 20191013182307 52247 pjatak.cz. g+D2/BTsi2GpQaqjYpTZbv8VrJcliK/3k05bXiigw0h0uIZdJTTckrHZ mF4kpSO7eT/8HCRRAMgt2fSFBqz1MYpZsdLC6qL1nbssjoHKJlAIrjAU faix6eGuGGBx0iaeaXVtirmOfdX1FzQ5ZinXy5jw6AQdU+95rIXJLA2y IiJHTEkR8ChkBcZSeXccI1KSxQYwQXxwdhDr3ieOIG8b7CYKOgZyOgf8 kalwVg5aasjyU5LSi8YAmrLNg1yji2L2Qm/C0lr2GFdEqbD0cOOL6rl/ lmfzUesrNJu2hznOuLWbwfOdyl13d2U/EVCC7DKZ0F6H6qAXV6eKDp/R RKT9Nd5q2UwcX5jYjYf2qVe8zn4FKezB3fT9SYDVVsMF9oSznM0hehQo UxtqaQDwc5o/eYvc5OZaUQanPuZv5znxPUMtUlC2KW3bQt6Teu0sPofH V6SJRFrSsAD763+oe+x1uCAtPbw07WpPuOZez/jwH5fgUK2wGXOSZh1/ 8Q3Yq7Cl9mGzs/H9aoBT4NkYbsZ3xsNoWm9tNRCBPNkbsQjjA6m8KAx7 8LRyHfHgO6xdFoyAILSHYltyJYKrToRp6iAGfhqCFLTDXk67iNJBovRJ RUPZsBozDiQ525Bx3uw7ilv34ctJmZnEeUzJg2dfw1RaubITnwAVmySG ys2H3yPWmu8=
pjatak.cz. 1799 IN RRSIG DNSKEY 7 2 1800 20201012182307 20191013182307 56890 pjatak.cz. gndqT11S4pB6cBrlzTLhmUsnRCtG1tvkB5w7yT9ZAXFbY/UPUi9j0UQf Tc+IdyFlJSQ/1dgIykbuW2iS59QYGN9N/gw7CfTMbei5oE//oyiztfav 1BRc68R9czF6MCyBOHeuz1lthcoW5GiIAFotl7vMQAKc1kdfBYBA5gyT V/Q3CWkfn0qJiikpkvDeXyQltzgL4n9Kzu+U9nj5gWaoMJx84jcXHPm+ 0zEP/GFxSWtg1B97u21tqyWLn0A1kG0jDXCgclrnh8ok+Pm/G5PC7nj3 rg9HrHcUhRjf5dcLtb9+rPcPOMnHyimcRfTEkKCVjkG5zKJpJIJ8I84i 8u8D+/sGRDAKQzndgh2koNyiiJa57zQay5Z4n2ipmRgSW7bpgQHLiwTu pw4r7Ir1jkTFTRAch9xeo3q28Oxm1n0pFrgGY/guR8hc9rmGo3S7Cfxh yOKsYeTqcccZ7ohdl8mLz6m0IaMehG1C9mQG2p+pkV1Jqd/IL5g2VS7N P2JP3qJ4q4LzlVxqJm18GVFQVfD/KU0pbbzBIIMXiEEnS6LCIeKVizXM jIWxOpZ+PKpoHXEQa4QJyLpst/4cZKhWQ0Jw9C6E7cyHZasu8n6qPsNW 526QMfvwvuXTjJU/8e20HhIHNby9+6lSmW78IZsmjoCxNrK9VLGr3HA0 dOFJAMsbBfs=
Launch a query to find a RRset of type DS for zone: pjatak.cz.
;; DSset of the DNSKEYset
pjatak.cz. 3599 IN DS 52247 7 2 2D07C4B5141F429688C3B65A46E67C1FB1F12E18294B0F537D499858 367ADA74
pjatak.cz. 3599 IN DS 56890 7 2 4354E245F9946922D33EAF0E4FB6FC6E21A9522589C0FCF27DBF1E9C 40501F0C
;; RRSIG of the DSset of the DNSKEYset
pjatak.cz. 3599 IN RRSIG DS 13 2 3600 20200611215106 20200529000420 17880 cz. x77/cRqUShOeQq69K/tNkQaDOdGUawrDyndA1jesm2Enkn7alQx5nXTe GKfzlsL6TMAJzykqTmWjR238QFf+0Q==
;; WE HAVE MATERIAL, WE NOW DO VALIDATION
;; VERIFYING CNAME RRset for www.pjatak.cz. with DNSKEY:56890: success
;; OK We found DNSKEY (or more) to validate the RRset
;; Now, we are going to validate this DNSKEY by the DS
;; OK a DS valids a DNSKEY in the RRset
;; Now verify that this DNSKEY validates the DNSKEY RRset
;; VERIFYING DNSKEY RRset for pjatak.cz. with DNSKEY:52247: success
;; OK this DNSKEY (validated by the DS) validates the RRset of the DNSKEYs, thus the DNSKEY validates the RRset
;; Now, we want to validate the DS : recursive call
Launch a query to find a RRset of type DNSKEY for zone: cz.
;; DNSKEYset that signs the RRset to chase:
cz. 17856 IN DNSKEY 256 3 13 G6mZG1HCWR18kSFRh8pEOQ0YB9n1ZvTekMJ0eydjdmt81mDEgiNQJ7Uo swUSwpx1cx9Gs63STudcK0Fs2lVKGg==
cz. 17856 IN DNSKEY 256 3 13 vCLlUrpvver9SfRlGSZvYrlxaHr+l3EvtLfaIzvZkHVK1aVTBB1a1rMk 4ZfSKFpWD9l2M83k0s92jwD97QklNQ==
cz. 17856 IN DNSKEY 257 3 13 nqzH7xP1QU5UOVy/VvxFSlrB/XgX9JDJzj51PzIj35TXjZTyalTlAT/f 7PAfaSD5mEG1N8Vk9NmI2nxgQqhzDQ==
;; RRSIG of the DNSKEYset that signs the RRset to chase:
cz. 17856 IN RRSIG DNSKEY 13 1 18000 20200605000000 20200529000000 20237 cz. U+pFqltP3ph0g6SfhFiLLMQtO4mWS7R6E72HwUXhV5yJj38vbsvchiKK SRYnWCvn7xatQGa1VqsGRlTn/3BxtQ==
cz. 17856 IN RRSIG DNSKEY 13 1 18000 20200614004859 20200531103539 17880 cz. ld25thn3UE4gJpRaMJuRkM7UIGYG8xiepop9Ez2ySzPFdy5bKnCXJHl8 f3mjmYUK9wUrIVyVLl0LAs8BnDw8zQ==
Launch a query to find a RRset of type DS for zone: cz.
;; DSset of the DNSKEYset
cz. 85902 IN DS 20237 13 2 CFF0F3ECDBC529C1F0031BA1840BFB835853B9209ED1E508FFF48451 D7B778E2
;; RRSIG of the DSset of the DNSKEYset
cz. 85902 IN RRSIG DS 8 1 86400 20200613050000 20200531040000 48903 . p0WkKaNOaIJzhfcqurL9H3KIJ/zKjhUXuNHnYivBZIknSauUAN7seYpv FxTme0ui7Ik075nYJPzzV+s66EEN103syKECI0g4B3KxQZ+KiYJ3X9ZC Li09nATY76ATHzqRuLMVti1QGo6AgleTUhl7okQBm8x+9BNTkInK6GE9 LAQ7NsbTucRqwQ2uqKUfRzqUeRugHV/EdnvXVYGefF3QQfmf8d6ueJ8w sx4VNxWCBz91ds67Y3Ba9oBnZDk/GeWmEmnUDjbu1fwyN4DyO8xGLrP7 ytKAKw07wGXIncVGN3W3RfvwsTceV3+zc2TroSP1ZyngNT/oWo0pp0qz 34CO9A==
;; WE HAVE MATERIAL, WE NOW DO VALIDATION
;; VERIFYING DS RRset for pjatak.cz. with DNSKEY:17880: success
;; OK We found DNSKEY (or more) to validate the RRset
;; Now, we are going to validate this DNSKEY by the DS
;; OK a DS valids a DNSKEY in the RRset
;; Now verify that this DNSKEY validates the DNSKEY RRset
;; VERIFYING DNSKEY RRset for cz. with DNSKEY:20237: success
;; OK this DNSKEY (validated by the DS) validates the RRset of the DNSKEYs, thus the DNSKEY validates the RRset
;; Now, we want to validate the DS : recursive call
Launch a query to find a RRset of type DNSKEY for zone: .
;; DNSKEYset that signs the RRset to chase:
. 44409 IN DNSKEY 257 3 8 AwEAAaz/tAm8yTn4Mfeh5eyI96WSVexTBAvkMgJzkKTOiW1vkIbzxeF3 +/4RgWOq7HrxRixHlFlExOLAJr5emLvN7SWXgnLh4+B5xQlNVz8Og8kv ArMtNROxVQuCaSnIDdD5LKyWbRd2n9WGe2R8PzgCmr3EgVLrjyBxWezF 0jLHwVN8efS3rCj/EWgvIWgb9tarpVUDK/b58Da+sqqls3eNbuv7pr+e oZG+SrDK6nWeL3c6H5Apxz7LjVc1uTIdsIXxuOLYA4/ilBmSVIzuDWfd RUfhHdY6+cn8HFRm+2hM8AnXGXws9555KrUB5qihylGa8subX2Nn6UwN R1AkUTV74bU=
. 44409 IN DNSKEY 256 3 8 AwEAAc4qsciJ5MdMUIu4n/pSTsSiU9OCyAanPTe5TcMX4v1hxhpFwiTG QUv3BXT6IAO4litrZKTUaj4vitqHW1+RQsHn3k/gSvt7FwyQwpy0mEnS hBgr6RQiGtlBODNY67sTl+W8M/b6SLTAaaDri3BO5u6wrDs149rMELJA doVBjmXW+zRH3kZzh3lwyTZsYtk7L+3DYbTiiHq+sRB4F9XoBPAz5Psv 4q4EiPq07nW3acbW84zTz3CyQUmQkJT9VB1oUKHz6sNoyccqzcMX4q1G HAYpQ7FAXlKMxidoN1Ay5DWANgTmgJXzKhcI2nIZoq1x3yq4814O1LQd 9QP68gI37+0=
;; RRSIG of the DNSKEYset that signs the RRset to chase:
. 44409 IN RRSIG DNSKEY 8 0 172800 20200611000000 20200521000000 20326 . KfazTUJ4/ezskruozpqOV/AKBcdoMVTxmLSjKaiZuNW6QVAY60/khxOa 0g7EqH/yBZP9pKIIMrxWtRzfl2YzZhkRfkDJKsNFDpyRtq06Hhhf8KHg FNsmmdUBKVtC7jh/5pldrMpInmtrV6344PkDS499x0qfsziD/FQCwF/X 8SWvfqKYmhvE8RjnlsycLtd1vao8iZtTDrevxPZCTRNwDfOufW5jNmDP 0nRKg/U0rXVXxf5q9jVX3Q875Kzyp1eewI2fPBmBXX5Vcpb3We0Gtcec KR0G0nsdXd898GiFlZU2IwrnemLWnCfE6LaoOcKcYzNO8dfMbI1hQzyI Wtnb6Q==
Launch a query to find a RRset of type DS for zone: .
;; NO ANSWERS: no more
;; WARNING There is no DS for the zone: .
;; WE HAVE MATERIAL, WE NOW DO VALIDATION
;; VERIFYING DS RRset for cz. with DNSKEY:48903: success
;; OK We found DNSKEY (or more) to validate the RRset
;; Ok, find a Trusted Key in the DNSKEY RRset: 20326
;; VERIFYING DNSKEY RRset for . with DNSKEY:20326: success
;; Ok this DNSKEY is a Trusted Key, DNSSEC validation is ok: SUCCESS
```
The same output for 1.1.1.1.

https://gitlab.nic.cz/knot/knot-resolver/-/issues/580
add cache usage to cache.stats() (2020-07-16T10:30:27+02:00, Petr Špaček)

Right now it is a pain to determine real cache usage, because the LMDB file size does not reflect space freed by the garbage collector etc.
I propose adding a new item `usage` to the table returned by `cache.stats()`. It should be a float in the range <0,100> %.
Its value should be computed as `Number of pages used`/`Max pages`. The command-line equivalent for testing purposes is:
```
$ mdb_stat -e <cache_path>
Environment Info
Map address: (nil)
Map size: 104857600
Page size: 4096
Max pages: 25600
Number of pages used: 15
Last transaction ID: 766
Max readers: 126
Number of readers used: 0
```

https://gitlab.nic.cz/knot/knot-resolver/-/issues/581
improved handling of malformed messages over TCP/TLS (2021-12-03T16:31:19+01:00, Tomas Krizek)

Currently, if kresd receives a malformed DNS message, it closes the TCP stream. This was probably meant as a heuristic for having lost orientation in the TCP stream. However, that isn't necessarily true, since the client might have sent a query that cannot be parsed but was prefixed with a correct message length.
This can be troublesome in some cases, because closing the stream also means no responses will be sent to the pipelined queries. While sending malformed queries probably isn't common, it can certainly happen when replaying or mirroring traffic.
Perhaps the condition for closing the stream could be relaxed: close the stream only if the DNS message length is less than the header size.
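The relaxed condition can be sketched as follows (illustrative Python, not kresd code; `should_close_stream` is a hypothetical helper): a DNS message can never be shorter than its fixed 12-byte header, so only a TCP length prefix below that strongly suggests the stream is desynchronized.

```python
# Sketch of the relaxed close condition. A DNS message is at least
# 12 bytes (the fixed header), so a 2-byte TCP length prefix below 12
# indicates lost orientation in the stream; anything larger may simply
# be an unparseable query that should not tear down the connection.
import struct

DNS_HEADER_SIZE = 12  # RFC 1035 fixed header size in bytes

def should_close_stream(length_prefix: bytes) -> bool:
    """Return True only when the prefixed length cannot be a DNS message."""
    (msg_len,) = struct.unpack("!H", length_prefix)
    return msg_len < DNS_HEADER_SIZE
```

With this check, an unparseable but correctly length-prefixed query would no longer close the stream and drop the pipelined queries behind it.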
Related: #471

https://gitlab.nic.cz/knot/knot-resolver/-/issues/582
fix locking around cache preallocation (2020-06-25T14:52:04+02:00, Petr Špaček)

Caveats in the [LMDB docs](http://www.lmdb.tech/doc/index.html) suggest that our cache preallocation might break LMDB locking:
> Do not have open an LMDB database twice in the same process at the same time. Not even from a plain open() call - close()ing it breaks flock() advisory locking.

https://gitlab.nic.cz/knot/knot-resolver/-/issues/585
[graphite] Prevents kresd from starting if the graphite server is not available (2020-06-29T17:05:09+02:00, Fre)

kresd fails to start up when the graphite server is not available:
`kresd[2281]: [system] error while loading config: /usr/lib/knot-resolver/kres_modules/graphite.lua:102: socket:connect: No route to host (workdir '/var/lib/knot-resolver')`
This should be a warning, not a critical error that prevents kresd from starting.
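The requested behaviour can be illustrated with a small sketch (Python for illustration only; the actual change would live in graphite.lua, and `connect_or_defer` is a hypothetical name): attempt the connection, and on failure log a warning and continue startup instead of raising a fatal config error.

```python
# Pattern the report asks for: treat an unreachable metrics backend as a
# soft failure. Try to connect; on failure warn and return None so that
# startup continues (a background timer would retry later).
import socket

def connect_or_defer(host: str, port: int, timeout: float = 1.0):
    """Return a connected socket, or None (with a warning) if unreachable."""
    try:
        return socket.create_connection((host, port), timeout=timeout)
    except OSError as exc:
        print(f"warning: graphite backend {host}:{port} unavailable "
              f"({exc}); will retry in background")
        return None

# Startup no longer depends on the backend being up
# (192.0.2.1 is TEST-NET-1 and is guaranteed unreachable):
sock = connect_or_defer("192.0.2.1", 2003, timeout=0.1)
```

The design point is simply that a metrics exporter is an optional side channel, so its availability should never gate the resolver's core service.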
This is Knot Resolver 5.1.1 running on Debian Buster.

https://gitlab.nic.cz/knot/knot-resolver/-/issues/586
doc: sphinx warnings due to anonymous structs (2020-10-07T11:07:43+02:00, Tomas Krizek)

Compiling the documentation with doxygen/breathe/sphinx started to generate the following warnings:
```
/home/tkrizek/git/knot-resolver/doc/lib.rst:23:Error in declarator or parameters
Invalid C declaration: Expected identifier in nested name. [error at 17]
struct kr_request::@6 qsource
-----------------^
```
This is caused by the anonymous structs which are used extensively in our code. Perhaps the best way to fix these warnings would be to use regular named structs instead of anonymous ones.
So far, I've seen this issue on Tumbleweed and Arch.
Related: https://github.com/sphinx-doc/sphinx/issues/2683

https://gitlab.nic.cz/knot/knot-resolver/-/issues/587
knot-resolver in forwarding mode tries to incorrectly validate an insecure domain if the zone contains RRSIGs (2020-07-23T08:56:43+02:00, Matti Hiljanen)

The zone digitransit.fi contains expired RRSIGs for a wrong domain (droneinfo.fi). This makes knot-resolver *in forwarding mode* go bogus for digitransit.fi even though the zone itself should be insecure, as no DS is published for it. In non-forwarding mode it works correctly.
(This has been reported to the zone owner and should be fixed at some point; however, knot-resolver should also handle the zone as it is.)
Reproduced on 5.1.2:
```
verbose(true)
true
> policy.add(policy.all(policy.FORWARD({'8.8.8.8'})))
[cb] => function cb(_, _): 0x40459bb8
[count] => 0
[id] => 0
> [00000.00][plan] plan 'api.digitransit.fi.' type 'A' uid [53271.00]
[53271.00][iter] 'api.digitransit.fi.' type 'A' new uid was assigned .01, parent uid .00
[53271.01][cach] => trying zone: ., NSEC, hash 0
[53271.01][cach] => NSEC sname: range search miss (!covers)
[53271.01][cach] => skipping zone: ., NSEC, hash 0;new TTL -123456789, ret -2
[53271.01][plan] plan '.' type 'DNSKEY' uid [53271.02]
[53271.02][iter] '.' type 'DNSKEY' new uid was assigned .03, parent uid .01
[53271.03][cach] => satisfied by exact RRset: rank 060, new TTL 172736
[53271.03][iter] <= rcode: NOERROR
[53271.03][vldr] <= parent: updating DNSKEY
[53271.03][vldr] <= answer valid, OK
[53271.01][iter] 'api.digitransit.fi.' type 'A' new uid was assigned .04, parent uid .00
[53271.04][plan] plan 'fi.' type 'DS' uid [53271.05]
[53271.05][iter] 'fi.' type 'DS' new uid was assigned .06, parent uid .04
[53271.06][cach] => trying zone: ., NSEC, hash 0
[53271.06][cach] => NSEC sname: range search miss (!covers)
[53271.06][cach] => skipping zone: ., NSEC, hash 0;new TTL -123456789, ret -2
[ ][nsre] score 21 for 8.8.8.8#00053; cached RTT: -1
[53271.06][resl] => id: '27374' querying: '8.8.8.8#00053' score: 21 zone cut: '.' qname: 'Fi.' qtype: 'DS' proto: 'udp'
[53271.06][iter] <= rcode: NOERROR
[53271.06][vldr] <= DS: OK
[53271.06][vldr] <= parent: updating DS
[53271.06][vldr] <= answer valid, OK
[53271.06][cach] => stashed fi. DS, rank 060, 330 B total, incl. 1 RRSIGs
[53271.06][resl] <= server: '8.8.8.8' rtt: 16 ms
[53271.04][iter] 'api.digitransit.fi.' type 'A' new uid was assigned .07, parent uid .00
[53271.07][plan] plan 'fi.' type 'DNSKEY' uid [53271.08]
[53271.08][iter] 'fi.' type 'DNSKEY' new uid was assigned .09, parent uid .07
[53271.09][cach] => trying zone: ., NSEC, hash 0
[53271.09][cach] => NSEC sname: range search miss (!covers)
[53271.09][cach] => skipping zone: ., NSEC, hash 0;new TTL -123456789, ret -2
[ ][nsre] score 21 for 8.8.8.8#00053; cached RTT: 16
[53271.09][resl] => id: '49150' querying: '8.8.8.8#00053' score: 21 zone cut: 'fi.' qname: 'fI.' qtype: 'DNSKEY' proto: 'udp'
[53271.09][iter] <= rcode: NOERROR
[53271.09][vldr] <= parent: updating DNSKEY
[53271.09][vldr] <= answer valid, OK
[53271.09][cach] => stashed fi. DNSKEY, rank 060, 826 B total, incl. 1 RRSIGs
[53271.09][resl] <= server: '8.8.8.8' rtt: 24 ms
[53271.07][iter] 'api.digitransit.fi.' type 'A' new uid was assigned .10, parent uid .00
[53271.10][plan] plan 'digitransit.fi.' type 'DS' uid [53271.11]
[53271.11][iter] 'digitransit.fi.' type 'DS' new uid was assigned .12, parent uid .10
[53271.12][cach] => trying zone: ., NSEC, hash 0
[53271.12][cach] => NSEC sname: range search miss (!covers)
[53271.12][cach] => skipping zone: ., NSEC, hash 0;new TTL -123456789, ret -2
[ ][nsre] score 21 for 8.8.8.8#00053; cached RTT: 20
[53271.12][resl] => id: '02617' querying: '8.8.8.8#00053' score: 21 zone cut: 'fi.' qname: 'dIGitRANsIt.FI.' qtype: 'DS' proto: 'udp'
[53271.12][resl] => id: '02617' querying: '8.8.8.8#00053' score: 21 zone cut: 'fi.' qname: 'dIGitRANsIt.FI.' qtype: 'DS' proto: 'udp'
[53271.12][iter] <= rcode: NOERROR
[53271.12][vldr] <= can't prove NODATA due to optout, going insecure
[53271.12][vldr] <= DS doesn't exist, going insecure
[53271.12][vldr] <= parent: updating DS
[53271.12][vldr] <= answer valid, OK
[53271.12][cach] => stashed fi. SOA, rank 060, 348 B total, incl. 1 RRSIGs
[53271.12][cach] => stashed packet: rank 060, TTL 1799, DS digitransit.fi. (1160 B)
[53271.12][resl] <= server: '8.8.8.8' rtt: 40 ms
[53271.10][iter] 'api.digitransit.fi.' type 'A' new uid was assigned .13, parent uid .00
[53271.13][plan] plan 'digitransit.fi.' type 'NS' uid [53271.14]
[53271.14][iter] 'digitransit.fi.' type 'NS' new uid was assigned .15, parent uid .13
[53271.15][cach] => trying zone: ., NSEC, hash 0
[53271.15][cach] => NSEC sname: range search miss (!covers)
[53271.15][cach] => skipping zone: ., NSEC, hash 0;new TTL -123456789, ret -2
[ ][nsre] score 21 for 8.8.8.8#00053; cached RTT: 30
[53271.15][resl] => id: '11470' querying: '8.8.8.8#00053' score: 21 zone cut: 'fi.' qname: 'diGITrAnSiT.fI.' qtype: 'NS' proto: 'udp'
[53271.15][resl] => id: '11470' querying: '8.8.8.8#00053' score: 21 zone cut: 'fi.' qname: 'diGITrAnSiT.fI.' qtype: 'NS' proto: 'udp'
[53271.15][iter] <= rcode: NOERROR
[53271.15][plan] plan 'digitransit.fi.' type 'DS' uid [53271.16]
[53271.15][vldr] >< cut changed, needs revalidation
[53271.15][plan] plan 'droneinfo.fi.' type 'DS' uid [53271.17]
[53271.15][resl] <= server: '8.8.8.8' rtt: 49 ms
[53271.17][iter] 'droneinfo.fi.' type 'DS' new uid was assigned .18, parent uid .15
[53271.18][cach] => trying zone: ., NSEC, hash 0
[53271.18][cach] => NSEC sname: range search miss (!covers)
[53271.18][cach] => skipping zone: ., NSEC, hash 0;new TTL -123456789, ret -2
[ ][nsre] score 21 for 8.8.8.8#00053; cached RTT: 39
[53271.18][resl] => id: '45800' querying: '8.8.8.8#00053' score: 21 zone cut: 'fi.' qname: 'DRONeiNfo.Fi.' qtype: 'DS' proto: 'udp'
[53271.18][iter] <= rcode: NOERROR
[53271.18][vldr] <= DS: OK
[53271.18][vldr] <= parent: updating DS
[53271.18][vldr] <= answer valid, OK
[53271.18][cach] => stashed droneinfo.fi. DS, rank 060, 332 B total, incl. 1 RRSIGs
[53271.18][resl] <= server: '8.8.8.8' rtt: 28 ms
[53271.16][iter] 'digitransit.fi.' type 'DS' new uid was assigned .19, parent uid .15
[53271.19][cach] => satisfied by exact packet: rank 060, new TTL 1799
[53271.19][iter] <= rcode: NOERROR
[53271.19][vldr] <= DS doesn't exist, going insecure
[53271.19][vldr] <= parent: updating DS
[53271.19][vldr] <= answer valid, OK
[53271.15][resl] => resuming yielded answer
[53271.15][vldr] >< bogus signatures: digitransit.fi. NS (2 matching RRSIGs, 2 expired, 0 not yet valid, 0 invalid signer, 0 invalid label count, 0 invalid key, 0 invalid crypto, 0 invalid NSEC)
[53271.15][vldr] >< cut changed (new signer), needs revalidation
[53271.15][resl] => resuming yielded answer
[53271.15][vldr] >< bogus signatures: digitransit.fi. NS (2 matching RRSIGs, 2 expired, 0 not yet valid, 0 invalid signer, 0 invalid label count, 0 invalid key, 0 invalid crypto, 0 invalid NSEC)
[53271.15][vldr] >< cut changed (new signer), needs revalidation
[53271.15][resl] => resuming yielded answer
[53271.15][vldr] >< bogus signatures: digitransit.fi. NS (2 matching RRSIGs, 2 expired, 0 not yet valid, 0 invalid signer, 0 invalid label count, 0 invalid key, 0 invalid crypto, 0 invalid NSEC)
[53271.15][vldr] <= continuous revalidation, fails
[53271.15][cach] => skipping bogus RR set NS
[53271.15][cach] => stashed packet: rank 025, TTL 3599, NS digitransit.fi. (471 B)
DNSSEC validation failure digitransit.fi. NS
[53271.15][resl] finished: 8, queries: 6, mempool: 82000 B
```

Assignee: Vladimír Čunát <vladimir.cunat@nic.cz>

https://gitlab.nic.cz/knot/knot-resolver/-/issues/592
can't build without systemd (2020-07-27T12:49:37+02:00, alium)

Hi,
thanks for your work! As I saw, systemd is still optional, but I can't build knot-resolver without systemd. On my system I use eudev and OpenRC, and I also use "-Dsystemd_files=disabled", but meson ended with an error:
```
[alois@picasso knot-resolver]$ meson build_arch \
--buildtype=release \
--prefix=/usr \
--sbindir=bin \
-Dkeyfile_default=/etc/trusted-key.key \
-Dsystemd_files=disabled \
-Dclient=enabled \
-Dinstall_kresd_conf=enabled \
-Dunit_tests=enabled
The Meson build system
Version: 0.54.3
Source dir: /tmp/makepkg/knot-resolver/src/knot-resolver
Build dir: /tmp/makepkg/knot-resolver/src/knot-resolver/build_arch
Build type: native build
Project name: knot-resolver
Project version: 5.1.2
C compiler for the host machine: cc (gcc 10.1.0 "cc (GCC) 10.1.0")
C linker for the host machine: cc ld.bfd 2.34.0
C++ compiler for the host machine: c++ (gcc 10.1.0 "c++ (GCC) 10.1.0")
C++ linker for the host machine: c++ ld.bfd 2.34.0
Host machine cpu family: x86_64
Host machine cpu: x86_64
Message: --- required dependencies ---
Found pkg-config: /bin/pkg-config (1.7.3)
Run-time dependency libknot found: YES 2.9.5
Run-time dependency libdnssec found: YES 2.9.5
Run-time dependency libzscanner found: YES 2.9.5
Run-time dependency libuv found: YES 1.38.1
Run-time dependency lmdb found: YES 0.9.25
Run-time dependency gnutls found: YES 3.6.14
Run-time dependency luajit found: YES 2.0.5
Message: ------------------------------
Message: --- optional dependencies ---
Run-time dependency openssl found: YES 1.1.1g
Checking for function "asprintf" : YES
Run-time dependency libcap-ng found: YES 0.7.10
Checking for function "sendmmsg" : YES
Run-time dependency libsystemd found: YES 243.7
Message: ---------------------------
meson.build:168:10: ERROR: String '243.7' cannot be converted to int
```
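The error points at meson.build line 168 converting the libsystemd version string to an integer; `'243.7'` carries a minor component, so a plain int conversion fails. A hedged sketch of the fix idea (Python for illustration; the actual code in meson.build may differ):

```python
# Hypothetical illustration of the parsing bug: converting the whole
# version string to int fails once a dot appears, while taking only the
# leading (major) component works for both '243' and '243.7'.
def major_version(version: str) -> int:
    """Return the leading numeric component of a version string."""
    return int(version.split(".")[0])
```

For example, `int('243.7')` raises a conversion error, while `major_version('243.7')` yields 243.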
Any idea?

https://gitlab.nic.cz/knot/knot-resolver/-/issues/594
64-bit ARM: our OBS packages (2023-10-16T12:00:21+02:00, Vladimír Čunát <vladimir.cunat@nic.cz>)

As uncovered in #593, there are still some issues. We'll most likely first fix them in our OBS packages and then try to fix them in official Debian. This ticket shall track the progress.

https://gitlab.nic.cz/knot/knot-resolver/-/issues/595
garbage collector does not handle on-line cache resize (2020-11-24T16:05:11+01:00, Petr Špaček)

Cache resize underneath a running GC leads to a fatal error:
```
Error starting DB transaction (MDB_MAP_RESIZED: Database contents grew beyond environment mapsize).
Error (MDB_MAP_RESIZED: Database contents grew beyond environment mapsize)
```

Assignee: Vladimír Čunát <vladimir.cunat@nic.cz>

https://gitlab.nic.cz/knot/knot-resolver/-/issues/596
Clarify contributing (2020-08-21T12:09:40+02:00, Florian Klink)

It seems forking of this repo is disabled, and PRs are closed. Is this intended?
If so, could you add a `CONTRIBUTING.md` explaining how to send contributions?

https://gitlab.nic.cz/knot/knot-resolver/-/issues/597
garbage collector does not handle cache overflow (2020-09-10T18:03:36+02:00, Petr Špaček)

GC exits if it fails to delete some records from the cache _because the cache is overfull_. In other words, the GC exits when the resolver needs it most.
```
Usage: 95.80%
Cache analyzed in 0.01 secs, 7764 records, limit category is 54.
854 records to be deleted using 0.15 MBytes of temporary memory, 0 records skipped due to memory limit.
Warning: skipping deletion because of error (not enough space provided)
Warning: skipping deletion because of error (not enough space provided)
Warning: skipping deletion because of error (not enough space provided)
Warning: skipping deletion because of error (not enough space provided)
Warning: skipping deletion because of error (not enough space provided)
Warning: skipping deletion because of error (not enough space provided)
Error: transaction failed (not enough space provided)
Deleted 200 records (0 already gone) types TYPE29154 TYPE59527 TYPE44187 TYPE36047 TYPE45693 TYPE21714 TYPE50204 TYPE51332 TYPE44444 TYPE29269 TYPE3130 TYPE46908 TYPE42383 TYPE45769 TYPE44996 TYPE52982 TYPE3964 TYPE27428 TYPE48741 TYPE41612 TYPE6865 TYPE27634 TYPE11442 TYPE59684 TYPE7524 TYPE35361 TYPE54929 TYPE35156 TYPE41909 TYPE47722 TYPE39628 TYPE41358 TYPE34831 TYPE14502 TYPE44987 TYPE31613 TYPE54239 TYPE48620 TYPE38013 TYPE18764 TYPE14055 TYPE44216 TYPE59777 TYPE44082 TYPE54725 TYPE22458 TYPE24155 TYPE28478 TYPE54779 TYPE36220 TYPE51461 TYPE25536 TYPE52075 TYPE34900 TYPE56540 TYPE20833 TYPE53735 TYPE36127 TYPE1196 TYPE41734 TYPE52917 TYPE47446 TYPE2667 TYPE46684 TYPE53393 TYPE51980 TYPE48422 TYPE8606 TYPE60457 TYPE17578 TYPE42928 TYPE2558 TYPE50788 TYPE56583 TYPE53266 TYPE7786 TYPE23574 TYPE42124 TYPE30050 TYPE48447 TYPE2899 TYPE56431 TYPE2027 TYPE10014 TYPE30069 TYPE10495 TYPE8553 TYPE27614 TYPE30114 TYPE1749 TYPE30103 TYPE39247 TYPE52317 TYPE12223 TYPE15458 TYPE29030 TYPE14759 TYPE18893 TYPE54959 TYPE23394 TYPE34964 TYPE50367 TYPE49032 TYPE3520 TYPE47228 TYPE45727 TYPE53351 TYPE10951 TYPE48483 TYPE55134 TYPE15948 TYPE11818 TYPE41057 TYPE27592 TYPE39439 TYPE44299 TYPE20265 TYPE37406 TYPE49793 TYPE37190 TYPE34190 TYPE52182 TYPE51724 TYPE37423 TYPE40471 TYPE33384 TYPE26887 TYPE24555 TYPE9772 TYPE40292 TYPE1881 TYPE28454 TYPE16893 TYPE12828 TYPE35800 TYPE48615 TYPE23795 TYPE17868 TYPE52707 TYPE29353 TYPE15356 TYPE17423 TYPE43681 TYPE29103 TYPE6193 TYPE10192 TYPE1533 TYPE52733 TYPE25324 TYPE18661 TYPE8028 TYPE15130 TYPE1390 TYPE20894 TYPE46928 TYPE20775 TYPE34785 TYPE35033 TYPE25865 TYPE29467 TYPE35999 TYPE19689 TYPE36486 TYPE26812 TYPE47835 TYPE12544 TYPE59178 TYPE40217 TYPE48360 TYPE6498 TYPE27278 TYPE20611 TYPE33166 TYPE26178 TYPE29607 TYPE41861 TYPE46627 TYPE46523 TYPE39303 TYPE31506 TYPE29641 TYPE57230 TYPE45646 TYPE18200 TYPE15259 TYPE9469 TYPE38975 TYPE35866 TYPE56761 TYPE7671 TYPE2275 TYPE8828 TYPE38107 TYPE40836 TYPE28134 TYPE46671 TYPE32355 TYPE38288 
TYPE42284 TYPE26703
It took 0.00 secs, 1 transactions (not enough space provided)
Error (not enough space provided)
```
Version: kresd 5.1.2

Assignee: Vladimír Čunát <vladimir.cunat@nic.cz>

https://gitlab.nic.cz/knot/knot-resolver/-/issues/600
DoH server rewrite (2020-10-14T11:12:53+02:00, Petr Špaček)

Version: 5.1.2
Problem: Current DoH server (based on lua-http) is slow and very hard to debug (e.g. #465).
Proposed approach: Complete rewrite using a better HTTP library.
Considerations:
- [x] drop HTTP1 support, https://tools.ietf.org/html/rfc8484 recommends HTTP2 anyway:
> HTTP/2 [RFC7540] is the minimum RECOMMENDED version of HTTP for use with DoH
- [x] drop support for insecure transport (HTTP-only): Insecure transport complicates design and implementation and has unclear benefits. Let's not implement it before there is clear use-case for it.
- [x] ~~design an extensibility mechanism for kresd modules [#616]~~

Assignee: Tomas Krizek

https://gitlab.nic.cz/knot/knot-resolver/-/issues/601
documentation omits how to make examples work if installed lua is not 5.1 (2020-09-07T16:53:50+02:00, tobiww)

As a new knot-resolver user, I had the darndest time following examples in the documentation. For example, adding 'http' to the modules list gave a non-helpful syntax error. I tried installing lua-http from the OS package manager, but that didn't help. The root cause of the problem was that I wasn't installing the right libraries for the right Lua version. What I ended up doing was uninstalling all versions of Lua on my system (including things that depended on Lua, such as neovim and VS Code), and then installing lua (the default 5.4), lua51 and luarocks. Then I ran `luarocks --lua-version 5.1 install http` and the http module worked with knot-resolver!
I propose adding some links in the documentation to describe how to check if you have the right lua version and how to install necessary add-on modules. Since you can land on any of the doc pages from a google search, I suggest that there is a link to the module-install info from every page where there are examples that depend on non-standard modules or libraries.
I use Arch Linux, which installs Lua 5.4 by default (as of Aug 2020). If you install luarocks and run the install command above without lua51 installed, you get an error that the lua.h header file is missing. That can be fixed by installing the lua51 package. I understand why lua51 isn't a dependency of knot-resolver, since it's built in, but the docs should include all the steps needed to install the modules used in the example code.
knot-resolver is a really amazing package, and for some, it may be the inspiration to learn enough Lua to do further automation with it. I'm not suggesting that you duplicate all the Lua programming guides, but it might be helpful to have a non-Lua programmer do a review pass on the documentation to help point out things that are non-obvious and could benefit from clarification or explanations of the syntax.

https://gitlab.nic.cz/knot/knot-resolver/-/issues/607
qname minimisation towards a forward that also uses qname minimisation (2020-09-04T14:59:35+02:00, David Plassmann)

I use knot-resolver on my Turris Omnia and used to (TLS-)forward all queries to Cloudflare. Now I encountered a hostname `if63n.sitelockcdn.net.` where the nameserver claims not to be authoritative:
```
dig NS sitelockcdn.net. @ns1.incapdns.net
; <<>> DiG 9.16.6 <<>> NS sitelockcdn.net. @ns1.incapdns.net
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOTAUTH, id: 34638
;; flags: qr aa; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;sitelockcdn.net. IN NS
;; Query time: 0 msec
```
When asking kresd the same question directly, one gets a `SERVFAIL` as expected. However, with the forward in place, kresd gets the `SERVFAIL` from Cloudflare and returns it to the client. For the logs, please see here:
[nowork.txt](/uploads/e6dbb8d6bf49e258b368383fb23d7c32/nowork.txt)
And here are the logs when not using a forward:
[works.txt](/uploads/7fd90e02857bdb765a73a1521d9b2f6b/works.txt)
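For anyone wanting to experiment, kresd's configuration exposes a global switch for qname minimisation. A minimal sketch of a config that keeps the TLS forward but turns minimisation off (the Cloudflare address and hostname are illustrative, and note the switch applies to all queries, not just forwarded ones):

```lua
-- Hedged sketch only, not a fix for the underlying issue:
-- disable qname minimisation globally, then forward over TLS.
option('NO_MINIMIZE', true)

policy.add(policy.all(policy.TLS_FORWARD({
  {'1.1.1.1', hostname='cloudflare-dns.com'},
})))
```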
I'm not sure what the solution should be. I would guess that adding an option to disable qname minimisation for forwards would be one way to solve it, but someone more familiar with the inner workings of knot-resolver might have a better idea.

https://gitlab.nic.cz/knot/knot-resolver/-/issues/608
Debian 9 OBS key has expired
2020-09-11T10:19:49+02:00 · ValdikSS

The package signing key used for Debian 9 packages has expired. Please update it.
```
Get:5 http://download.opensuse.org/repositories/home:/CZ-NIC:/knot-resolver-latest/Debian_9.0 InRelease [1,588 B]
Err:5 http://download.opensuse.org/repositories/home:/CZ-NIC:/knot-resolver-latest/Debian_9.0 InRelease
The following signatures were invalid: EXPKEYSIG 74062DB36A1F4009 home:CZ-NIC OBS Project <home:CZ-NIC@build.opensuse.org>
```
```
$ gpg --search 74062DB36A1F4009
gpg: data source: http://hkps.pool.sks-keyservers.net:11371
(1) home:CZ-NIC OBS Project <home:CZ-NIC@build.opensuse.org>
2048 bit RSA key 74062DB36A1F4009, created: 2018-02-15, expires: 2020-04-25 (expired)
```

https://gitlab.nic.cz/knot/knot-resolver/-/issues/610
migrate upstream repositories from OBS
2024-01-19T17:08:29+01:00 · Tomas Krizek

The OBS infrastructure has some serious issues, some of which are security related.
The mirrors can get weirdly out of sync, which can cause file size / checksum mismatches between the downloaded repository metadata (the `Packages` file for Debian) and the downloaded package. This issue has been observed by our users.
The packages are also downloaded over plain HTTP, because not all the mirrors support HTTPS. Users have complained about this on the [mailing list](https://lists.nic.cz/pipermail/knot-resolver-users/2019/000193.html).
Overall, OBS may be suitable for testing and automation, but the official upstream packages should be somewhere more reliable. I propose to use the same approach as [Knot DNS](https://www.knot-dns.cz/download/) to be more consistent.
Features we want:
- supported distributions
  - Debian (9), 10+
  - Ubuntu (16.04), 18.04, 20.04, latest rolling?
  - Fedora - all supported
  - CentOS 7, 8
  - openSUSE - Leap 15.x
  - Arch is a bonus
- supported architectures
  - x86_64
  - aarch64 ?
  - armv7 ?
- control over build root dependencies (e.g. using a newer/older Knot DNS)
- possibility to use multiple repositories (latest, testing, ...)
- re-builds if distribution packages/dependencies change?
- non-public repositories for security releases for customers?

(Jakub Ružička)
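For comparison, the Knot DNS download page has Debian/Ubuntu users add a self-hosted repository. A hypothetical equivalent for knot-resolver could look like this to end users (the `pkg.example.org` URL and file names below are invented placeholders, not real endpoints):

```shell
# Hypothetical end-user setup if upstream repositories move off OBS.
# The repository URL and key location are placeholders (assumptions).
wget -qO /usr/share/keyrings/knot-resolver.gpg \
    https://pkg.example.org/knot-resolver/apt.gpg
echo 'deb [signed-by=/usr/share/keyrings/knot-resolver.gpg] https://pkg.example.org/knot-resolver/debian buster main' \
    | sudo tee /etc/apt/sources.list.d/knot-resolver.list
sudo apt update
```

Serving the repository over HTTPS with a `signed-by` keyring would address both the mixed-mirror checksum problem and the plain-HTTP complaint described above.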