Knot Resolver issues
https://gitlab.nic.cz/knot/knot-resolver/-/issues

Add VRF support (#511)
https://gitlab.nic.cz/knot/knot-resolver/-/issues/511
2020-02-05, krombel

I try to run knot-resolver in a VRF.
I want the /metrics endpoint to be accessible only internally, but DoH/DoT resolving should happen over a VRF interface which is meant for external requests.

support running behind NAT64? (#494)
https://gitlab.nic.cz/knot/knot-resolver/-/issues/494
2019-07-31, Vladimír Čunát

Minor use case: _running_ kresd on a machine without native IPv4. (maybe)
While DNS servers tend to have a much higher rate of IPv6 support than (say) HTTP servers, there are still problems; e.g. [Fastly CDN](https://fastly.net) is a long-standing example that doesn't have any IPv6 glue.
- - -
So far, one can at least improve the NS selection algorithm by setting:
```lua
net.ipv4 = false
```
or work around the problem by forwarding to someplace with native IPv4.

can't reliably fetch stats when using SO_REUSEPORT (#488)
https://gitlab.nic.cz/knot/knot-resolver/-/issues/488
2020-06-15, Jean-Daniel

I'm using Knot Resolver with systemd, and I want to use the stats module + http module to fetch stats in Prometheus format.
My problem is that if I start more than one instance (kresd@1, kresd@2, …), stats-fetching requests are distributed among the instances, and each request returns only the stats from the answering instance.
I can't find a reliable way to fetch the stats in such a configuration.
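If each instance can be given its own HTTP port, aggregating the per-instance outputs is mechanical; below is a hedged sketch (plain-text Prometheus exposition format, simple counter semantics assumed; this is not a kresd API):

```python
def parse_metrics(text):
    """Parse a simple Prometheus text exposition into {metric: value}."""
    out = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip comments and TYPE/HELP lines
        name, _, value = line.rpartition(" ")
        try:
            out[name] = float(value)
        except ValueError:
            pass  # ignore malformed lines
    return out

def aggregate(instance_texts):
    """Sum metric values across instances (only meaningful for counters)."""
    total = {}
    for text in instance_texts:
        for name, value in parse_metrics(text).items():
            total[name] = total.get(name, 0.0) + value
    return total
```

Gauges would need different treatment (min/max/avg), which is one reason a single aggregation point is nicer than scraping each instance separately.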
Workaround:
I can fetch and aggregate individual workers' stats from the control sockets, but the control socket is very unreliable (it is not able to parse two successive queries properly and often tries to interpret them as a single query).

DNS64 does not synthesise if AAAA query fails but A query works (#483)
https://gitlab.nic.cz/knot/knot-resolver/-/issues/483
2019-12-18, Petr Špaček

Query for `internetbanken.privat.nordea.se. AAAA` ends up with SERVFAIL because it is broken on the authoritative side, but the query `internetbanken.privat.nordea.se. A` succeeds.
https://tools.ietf.org/html/rfc6147#section-5.1.2 seems to specify (in rather convoluted language) that any failure in AAAA resolution should trigger an A subquery and DNS64 synthesis.
This was reported during RIPE 78 meeting because some people were not able to reach their bank website.
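For reference, the synthesis step itself is mechanical: the IPv4 address from the A answer is embedded into the NAT64 prefix per RFC 6052. A sketch using the well-known /96 prefix (illustration only, not the module's code):

```python
import ipaddress

def dns64_synth(ipv4, prefix="64:ff9b::/96"):
    """Embed an IPv4 address into a /96 NAT64 prefix (RFC 6052 style)."""
    net = ipaddress.ip_network(prefix)
    embedded = int(net.network_address) | int(ipaddress.IPv4Address(ipv4))
    return str(ipaddress.IPv6Address(embedded))
```

So the only hard part is deciding *when* to synthesise, which is exactly what this issue is about.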
I can see two problems with the current DNS64 module (as of Knot Resolver 4.0.0):
- A failed AAAA query does not trigger synthesis, e.g. if we get SERVFAIL. This should be easy to fix.
- An AAAA query which fails because none of the NS servers respond to AAAA queries never reaches the module's `consume()` layer, so the DNS64 module does not get a chance to do the A query and synthesis. This will be harder to fix.

FORWARD/TLS_FORWARD: support forwarding to hostname, DANE (#481)
https://gitlab.nic.cz/knot/knot-resolver/-/issues/481
2019-12-18, Tomas Krizek

The `FORWARD` / `TLS_FORWARD` policies currently require an IP address as a target. Instead, a hostname could be provided. However, the initial bootstrap + handling of TTLs could be quite complex.
If the bootstrap + TTL problem were solved, `TLS_FORWARD` could also support DANE ([RFC 8310, section 8.2](https://tools.ietf.org/html/rfc8310#section-8.2)).

NOERROR from pre-RFC 2308 servers is treated as lame (#479)
https://gitlab.nic.cz/knot/knot-resolver/-/issues/479
2019-05-23, Petr Špaček

Knot Resolver 4.0.0 does not accept NOERROR answers from pre-RFC 2308 auths, i.e. auths which do not send a SOA RR in the AUTHORITY section of a NOERROR answer.
Example from the live Internet:
```
resolve('blogs.cisco.com', kres.type.AAAA, kres.class.IN, {}, function(pkt) print(pkt) end)
```
...
```
[65537.22][iter] 'blogs.glb-ext.cisco.com.' type 'AAAA' new uid was assigned .25, parent uid .00
[65537.25][resl] => id: '43849' querying: '72.163.5.22#00053' score: 10 zone cut: 'glb-ext.cisco.com.' qname: 'BLogS.glb-eXT.CiscO.Com.' qtype: 'AAAA' proto: 'udp'
[65537.25][iter] <= answer received:
;; ->>HEADER<<- opcode: QUERY; status: NOERROR; id: 43849
;; Flags: qr cd QUERY: 1; ANSWER: 0; AUTHORITY: 0; ADDITIONAL: 1
;; EDNS PSEUDOSECTION:
;; Version: 0; flags: do; UDP size: 1280 B; ext-rcode: Unused
;; QUESTION SECTION
blogs.glb-ext.cisco.com. AAAA
[65537.25][iter] <= rcode: NOERROR
[65537.25][iter] <= lame response: non-auth sent negative response
```
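For context, RFC 2308 expects a negative (NOERROR/NXDOMAIN) answer to carry a SOA RR in the AUTHORITY section; the answer above has none, which is what trips the lameness check. A toy classifier (illustration only, not kresd's actual logic):

```python
def classify_negative(rcode, answer_count, authority_types):
    """Rough sketch: does a negative answer look RFC 2308 compliant?"""
    if rcode not in ("NOERROR", "NXDOMAIN") or answer_count > 0:
        return "positive-or-error"
    # RFC 2308 negative answers carry a SOA in AUTHORITY; pre-2308 auths omit it.
    return "rfc2308-negative" if "SOA" in authority_types else "pre-rfc2308-negative"
```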
This seems to be caused by `is_authoritative()` in lib/layer/iterate.c.

daemon: support AF_UNIX for Do53 and DoT sockets? (#475)
https://gitlab.nic.cz/knot/knot-resolver/-/issues/475
2020-11-24, Vladimír Čunát

I split this away from [AF_UNIX for the other sockets](https://gitlab.labs.nic.cz/knot/knot-resolver/merge_requests/811) because I saw some assumptions in the worker and/or session code, and there has been no demand so far. In particular, a different libuv handle type would have to be used for AF_UNIX (a third case added to UDP and TCP).

validate: NSEC proofs can confuse NXDOMAIN with NODATA (#473)
https://gitlab.nic.cz/knot/knot-resolver/-/issues/473
2019-04-30, Vladimír Čunát

[Real-life example](https://gitlab.labs.nic.cz/knot/knot-resolver/issues/462#note_104852).
The records get into the aggressive cache, which doesn't suffer from this bug, so only the first answer can be wrong. So far I can see no security implications of exchanging NODATA with NXDOMAIN.

FORMERR for bad packets (#471)
https://gitlab.nic.cz/knot/knot-resolver/-/issues/471
2020-10-02, Vladimír Čunát

Currently a request from a client is either accepted or _ignored_. We should return `FORMERR` for packets where the header looks like DNS.

maintenance daemon (#459)
https://gitlab.nic.cz/knot/knot-resolver/-/issues/459
2022-05-08, Petr Špaček

Knot Resolver has a bunch of tasks which need to be done only once, so it does not make much sense to do them from all workers independently.
Examples:
- [x] cache cleanup - #257
- [ ] cache import - `zimport` into cache
- [ ] TLS certificate maintenance (DNS-over-TLS, HTTP module)
- [ ] TLS ticket rotation
- [ ] RFC 5011
- [ ] TA bootstrap
... and possibly others.
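A minimal way to ensure such a task runs in exactly one process is an advisory file lock; a sketch under stated assumptions (the lock path is a placeholder, and this is not an existing kresd mechanism):

```python
import fcntl
import os

def try_become_maintainer(lock_path):
    """Return a locked fd if this process won the maintainer role, else None."""
    fd = os.open(lock_path, os.O_CREAT | os.O_RDWR, 0o600)
    try:
        # Non-blocking exclusive lock: exactly one holder at a time.
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return fd
    except OSError:
        os.close(fd)
        return None
```

A dedicated daemon as proposed below avoids even this election step, at the cost of an IPC channel.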
In the long term we might create a "maintenance" daemon which could take care of these tasks so they would not block worker threads (it would also avoid duplication of tasks).
This would require a means of communication between the maintenance daemon and the workers.

ugly uv_foo_t * casts all over the place (#455)
https://gitlab.nic.cz/knot/knot-resolver/-/issues/455
2019-03-12, Vladimír Čunát

The following discussion from !786 should be addressed:
- [ ] @pspacek started a [discussion](https://gitlab.labs.nic.cz/knot/knot-resolver/merge_requests/786#note_100831): (+2 comments)
> Wondering out loud: Would it be nicer if we used a union for this? That would avoid explicit retyping all over the place ...

test huge pages (#446)
https://gitlab.nic.cz/knot/knot-resolver/-/issues/446
2019-02-06, Petr Špaček

We might test whether some variant of huge pages can help with performance ... it is of uncertain value, but it is one more idea we can test during benchmarking.
See https://fosdem.org/2019/schedule/event/hugepages_databases/

DNSSEC validation failing for empty subsubdomain (#433)
https://gitlab.nic.cz/knot/knot-resolver/-/issues/433
2021-12-13, Ivana Krumlova

test: [val_anchor_nx.rpl](/uploads/48b7d6e4bd7cea788a7497622812280e/val_anchor_nx.rpl)
zone: [example.com.zone.signed](/uploads/7d89cae3239747a49b306a182ef80531/example.com.zone.signed)
log: [server.log](/uploads/947010f4266c53e8cd6940b3441e30bf/server.log)

"=> going insecure because there's no covering TA" message (#430)
https://gitlab.nic.cz/knot/knot-resolver/-/issues/430
2018-12-14, Ivana Krumlova

Deckard often prints this at the beginning of the log, even on tests where data are DNSSEC-validated correctly.
Maybe this is a problem in kresd logging or something like that.
For example:
log:
```
deckard.py 364 DEBUG [00000.00][plan] plan 'b.example.com.' type 'DS' uid [36622.00]
deckard.py 364 DEBUG [36622.00][iter] 'b.example.com.' type 'DS' new uid was assigned .01, parent uid .00
deckard.py 364 DEBUG [36622.01][resl] => going insecure because there's no covering TA
deckard.py 364 DEBUG [36622.01][resl] => using root hints
deckard.py 364 DEBUG [36622.01][iter] 'b.example.com.' type 'DS' new uid was assigned .02, parent uid .00
deckard.py 364 DEBUG [36622.02][resl] => id: '50568' querying: '193.0.14.129' score: 10 zone cut: '.' qname: 'b.EXampLe.COm.' qtype: 'DS' proto: 'udp'
deckard.py 364 DEBUG [36622.02][iter] <= answer received:
deckard.py 364 DEBUG ;; ->>HEADER<<- opcode: QUERY; status: NOERROR; id: 50568
deckard.py 364 DEBUG ;; Flags: qr QUERY: 1; ANSWER: 0; AUTHORITY: 1; ADDITIONAL: 2
deckard.py 364 DEBUG
deckard.py 364 DEBUG ;; EDNS PSEUDOSECTION:
deckard.py 364 DEBUG ;; Version: 0; flags: ; UDP size: 1280 B; ext-rcode: Unused
deckard.py 364 DEBUG
deckard.py 364 DEBUG ;; QUESTION SECTION
deckard.py 364 DEBUG b.example.com. DS
deckard.py 364 DEBUG
deckard.py 364 DEBUG ;; AUTHORITY SECTION
deckard.py 364 DEBUG com. 3600 NS a.gtld-servers.net.
deckard.py 364 DEBUG
deckard.py 364 DEBUG [36622.02][iter] <= loaded 1 glue addresses
deckard.py 364 DEBUG [36622.02][iter] <= referral response, follow
deckard.py 364 DEBUG [36622.02][cach] => stashed com. NS, rank 002, 36 B total, incl. 0 RRSIGs
deckard.py 364 DEBUG [36622.02][cach] => stashed also 1 nonauth RRsets
deckard.py 364 DEBUG [36622.02][resl] <= server: '193.0.14.129' rtt: 103 ms
deckard.py 364 DEBUG [36622.02][iter] 'b.example.com.' type 'DS' new uid was assigned .03, parent uid .00
deckard.py 364 DEBUG [36622.03][resl] => id: '52885' querying: '192.5.6.30' score: 10 zone cut: 'com.' qname: 'b.EXampLe.CoM.' qtype: 'DS' proto: 'udp'
deckard.py 364 DEBUG [36622.03][iter] <= answer received:
deckard.py 364 DEBUG ;; ->>HEADER<<- opcode: QUERY; status: NOERROR; id: 52885
deckard.py 364 DEBUG ;; Flags: qr QUERY: 1; ANSWER: 0; AUTHORITY: 1; ADDITIONAL: 2
deckard.py 364 DEBUG
deckard.py 364 DEBUG ;; EDNS PSEUDOSECTION:
deckard.py 364 DEBUG ;; Version: 0; flags: ; UDP size: 1280 B; ext-rcode: Unused
deckard.py 364 DEBUG
deckard.py 364 DEBUG ;; QUESTION SECTION
deckard.py 364 DEBUG b.example.com. DS
deckard.py 364 DEBUG
deckard.py 364 DEBUG ;; AUTHORITY SECTION
deckard.py 364 DEBUG example.com. 3600 NS ns.example.com.
deckard.py 364 DEBUG
deckard.py 364 DEBUG [36622.03][iter] <= loaded 1 glue addresses
deckard.py 364 DEBUG [36622.03][iter] <= referral response, follow
deckard.py 364 DEBUG [36622.03][cach] => stashed example.com. NS, rank 002, 32 B total, incl. 0 RRSIGs
deckard.py 364 DEBUG [36622.03][cach] => stashed also 1 nonauth RRsets
deckard.py 364 DEBUG [36622.03][resl] <= server: '192.5.6.30' rtt: 5 ms
deckard.py 364 DEBUG [36622.03][iter] 'b.example.com.' type 'DS' new uid was assigned .04, parent uid .00
deckard.py 364 DEBUG [36622.04][resl] >< TA: 'example.com.'
deckard.py 364 DEBUG [36622.04][plan] plan 'example.com.' type 'DNSKEY' uid [36622.05]
deckard.py 364 DEBUG [36622.05][iter] 'example.com.' type 'DNSKEY' new uid was assigned .06, parent uid .04
deckard.py 364 DEBUG [36622.06][cach] => no NSEC* cached for zone: example.com.
deckard.py 364 DEBUG [36622.06][cach] => skipping zone: example.com., NSEC, hash 0;new TTL -123456789, ret -2
deckard.py 364 DEBUG [36622.06][cach] => skipping zone: example.com., NSEC, hash 0;new TTL -123456789, ret -2
deckard.py 364 DEBUG [36622.06][resl] => id: '19571' querying: '1.2.3.4' score: 10 zone cut: 'example.com.' qname: 'EXaMPlE.Com.' qtype: 'DNSKEY' proto: 'udp'
deckard.py 364 DEBUG [36622.06][iter] <= answer received:
deckard.py 364 DEBUG ;; ->>HEADER<<- opcode: QUERY; status: NOERROR; id: 19571
deckard.py 364 DEBUG ;; Flags: qr QUERY: 1; ANSWER: 2; AUTHORITY: 2; ADDITIONAL: 3
deckard.py 364 DEBUG
deckard.py 364 DEBUG ;; EDNS PSEUDOSECTION:
deckard.py 364 DEBUG ;; Version: 0; flags: do; UDP size: 1280 B; ext-rcode: Unused
deckard.py 364 DEBUG
deckard.py 364 DEBUG ;; QUESTION SECTION
deckard.py 364 DEBUG example.com. DNSKEY
deckard.py 364 DEBUG
deckard.py 364 DEBUG ;; ANSWER SECTION
deckard.py 364 DEBUG example.com. 3600 DNSKEY 256 3 7 AwEAAef0Gt81KzrbFGbFmk6VeEzLLcRbnKiDjdMBO7R+HsQWCO9YpPGx20mBEV7ISCLva+LZulf584i30ga7qMeVsarsdh9xCYtyMXd4Ex5nMEXxV9f2Or+FjihPduL2TnAlWpvL8oc1oKVI2RISTT1yf8IYy6X/FpfmMP819WBN2Kit
deckard.py 364 DEBUG example.com. 3600 RRSIG DNSKEY 7 2 3600 20181230101851 20181130101851 16907 example.com. RPXAcaVjBdtk/geHTdTg9ZOKREpAdjZAopRE/5Kk9fdFYQWwg0uRxexLPJ11jXjnp9MKOp1FehctyvE/mm1lB/J6+YepHu3tRAzzJ9YfjVxJjUppQv/nA/fU55MHWYhdhXwKn7F+PXD8+MFlAqPyFz9mYZEO89lI4P2/Wf4xpv4=
deckard.py 364 DEBUG
deckard.py 364 DEBUG ;; AUTHORITY SECTION
deckard.py 364 DEBUG example.com. 3600 NS ns.example.com.
deckard.py 364 DEBUG example.com. 3600 RRSIG NS 7 2 3600 20181230101851 20181130101851 16907 example.com. KXsKhCme80OQl4qekE+q0KvymkhEelk+OdOsajCsGmfG5eeCEkN58gVw5fBgtR2Ekp15KLsV1elsyVL8i7W5Hp5f2G70/plqSQ+78n3Al5jXONgNoVFSOuf8N179F2uf3k20MpnlxQQ7W/VX6SpuAOejyVpp6il6dm2YwRHHnX4=
deckard.py 364 DEBUG
deckard.py 364 DEBUG [36622.06][iter] <= loaded 1 glue addresses
deckard.py 364 DEBUG [36622.06][iter] <= rcode: NOERROR
deckard.py 364 DEBUG [36622.06][vldr] <= parent: updating DNSKEY
deckard.py 364 DEBUG [36622.06][vldr] <= answer valid, OK
deckard.py 364 DEBUG [36622.06][cach] => stashed example.com. DNSKEY, rank 060, 314 B total, incl. 1 RRSIGs
deckard.py 364 DEBUG [36622.06][cach] => not overwriting A ns.example.com.
deckard.py 364 DEBUG [36622.06][resl] <= server: '1.2.3.4' rtt: 7 ms
deckard.py 364 DEBUG [36622.04][iter] 'b.example.com.' type 'DS' new uid was assigned .07, parent uid .00
deckard.py 364 DEBUG [36622.07][resl] => id: '04066' querying: '1.2.3.4' score: 11 zone cut: 'example.com.' qname: 'b.EXAmPLE.cOM.' qtype: 'DS' proto: 'udp'
deckard.py 364 DEBUG [36622.07][iter] <= answer received:
deckard.py 364 DEBUG ;; ->>HEADER<<- opcode: QUERY; status: NOERROR; id: 4066
deckard.py 364 DEBUG ;; Flags: qr aa QUERY: 1; ANSWER: 0; AUTHORITY: 4; ADDITIONAL: 1
deckard.py 364 DEBUG
deckard.py 364 DEBUG ;; EDNS PSEUDOSECTION:
deckard.py 364 DEBUG ;; Version: 0; flags: do; UDP size: 1280 B; ext-rcode: Unused
deckard.py 364 DEBUG
deckard.py 364 DEBUG ;; QUESTION SECTION
deckard.py 364 DEBUG b.example.com. DS
deckard.py 364 DEBUG
deckard.py 364 DEBUG ;; AUTHORITY SECTION
deckard.py 364 DEBUG example.com. 86394 SOA ns.iana.org. nstld.iana.org. 2007092000 1800 900 604800 86400
deckard.py 364 DEBUG example.com. 86394 RRSIG SOA 7 2 86394 20181230101851 20181130101851 16907 example.com. uQjgfvlcxQLPfqetqWjTgKTbDOK3BoqbdmrqudrEl/X/S3OR8uhTQu7PEsrJm7IP7lmKcsbF4LAFjBNRp28G4at8v5cnCpvZfKFDzO3JzCubaVnn18rSZj9gM1e4CN5ms/aAlr5I2hDhIQnsKmhxQBTrngyTcpGgf/YQuruMRKw=
deckard.py 364 DEBUG *.example.com. 3600 NSEC *.b.example.com. A MX RRSIG NSEC
deckard.py 364 DEBUG *.example.com. 86400 RRSIG NSEC 7 2 86400 20181230101851 20181130101851 16907 example.com. 5NyjMTv7p0jvYrfxQzTJXvTlf1Uy2tMSmYKEWZoBq87u6mLNBtRgpKl91gpVvT8o+uA2XAznujnFZYgLdE9Swk87KqQQSWkyM81458SuSVwB5hma9afCrB38FH9D9aOCN1nfqIuoEsQi3Bu3Uvtr+eV7oE97ViROSy/1pyyKg9A=
deckard.py 364 DEBUG
deckard.py 364 DEBUG [36622.07][iter] <= rcode: NOERROR
deckard.py 364 DEBUG [36622.07][vldr] <= DS doesn't exist, going insecure
deckard.py 364 DEBUG [36622.07][vldr] <= answer valid, OK
deckard.py 364 DEBUG [36622.07][cach] => stashed *.example.com. NSEC, rank 060, 204 B total, incl. 1 RRSIGs
deckard.py 364 DEBUG [36622.07][cach] => stashed example.com. SOA, rank 060, 228 B total, incl. 1 RRSIGs
deckard.py 364 DEBUG [36622.07][cach] => nsec_p stashed for example.com. (new, hash: 0)
deckard.py 364 DEBUG [36622.07][resl] <= server: '1.2.3.4' rtt: 7 ms
deckard.py 364 DEBUG [36622.07][resl] AD: request classified as SECURE
deckard.py 364 DEBUG [36622.07][resl] finished: 4, queries: 2, mempool: 16400 B
scenario.py 536 INFO [ RANGE 0-100 ] {'192.5.6.30'} received: 1 sent: 1
scenario.py 536 INFO [ RANGE 0-100 ] {'193.0.14.129'} received: 1 sent: 1
scenario.py 536 INFO [ RANGE 0-100 ] {'1.2.3.4'} received: 2 sent: 2
. [100%]
1 passed, 1 skipped in 1.32 seconds
```
from test [val_mal_wc.rpl](https://gitlab.labs.nic.cz/knot/deckard/blob/master/sets/resolver/val_mal_wc.rpl)

negative trust anchor does not prevent NXDOMAIN from aggressive cache (#429)
https://gitlab.nic.cz/knot/knot-resolver/-/issues/429
2020-04-06, Petr Špaček

Right now the aggressive cache masks "grafted" domains, e.g. fake TLDs, even if these are listed as negative trust anchors.
This is unexpected behavior and forces users to use `NO_CACHE`, which is not optimal. In the future we should exempt NTAs from the aggressive cache.

Too many requests for DNSKEY (#425)
https://gitlab.nic.cz/knot/knot-resolver/-/issues/425
2018-11-29, Ivana Krumlova

Happens when an unsupported algorithm (DSA) is used.
Happens on this rpl test:
[val_noadwhennodo.rpl](/uploads/e3e52c6d62772621faa8047cd247ea00/val_noadwhennodo.rpl)
Server log:
[server.log](/uploads/27aff279562f79da370b5b2de67a1d5d/server.log)

support prefilling for arbitrary zone (#417)
https://gitlab.nic.cz/knot/knot-resolver/-/issues/417
2024-02-28, Petr Špaček

Ulrich from IIS requested a feature which would allow them to prefill the resolver's cache with an arbitrary zone, i.e. not only the root zone.
Technical note:
Simply removing the checks for the zone name does not work, because `DS` records are missing in the cache and this leads to failing validation. Maybe we can just wrap the import in a function which requests the `DS` and calls the import from a query callback?

Improving TCP/TLS timer logic for long-lived connections (#405)
https://gitlab.nic.cz/knot/knot-resolver/-/issues/405
2018-10-31, Baptiste

I am testing long-lived client connections to Knot Resolver over TCP or TLS.
Currently, the idle timeout is quite short: `kresd` closes a client TCP connection after just a few seconds when no request is made. While investigating this part of the code, I found that the idle timeout strategy is quite complex, and mixes up the timeout values for "downstream" TCP connections and "upstream" TCP connections (while in reality, they have very different requirements).
Below is an attempt at documenting the current behaviour, so that we can discuss how to improve it.
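As a reference point for that discussion, a generic idle-timeout pattern (not kresd's implementation) keeps one deadline per connection and pushes it forward on every request; downstream and upstream connections would then simply get different timeout values:

```python
import time

class IdleTimer:
    """Track per-connection idleness; activity pushes the deadline forward."""
    def __init__(self, timeout_s, now=time.monotonic):
        self.timeout_s = timeout_s
        self.now = now  # injectable clock, handy for testing
        self.deadline = self.now() + timeout_s

    def touch(self):
        """Call on every request/response to keep the connection alive."""
        self.deadline = self.now() + self.timeout_s

    def expired(self):
        return self.now() >= self.deadline
```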
This is related to #311 (short idle timeout for outgoing TLS connections) and #378 ("unify processing of inbound and outbound TCP connections where possible").

incorrect handling of EDNS version 1+ (#404)
https://gitlab.nic.cz/knot/knot-resolver/-/issues/404
2019-07-09, Petr Špaček

Apparently we do not return BADVERS as we should:
```
$ dig +nocookie +rec +noad +edns=1 +noednsneg +ednsopt=100 soa isc.org. @1.1.1.1
; <<>> DiG 9.13.0-dev <<>> +nocookie +rec +noad +edns=1 +noednsneg +ednsopt=100 soa isc.org. @1.1.1.1
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 20124
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1452
;; QUESTION SECTION:
;isc.org. IN SOA
;; ANSWER SECTION:
isc.org. 6914 IN SOA ns-int.isc.org. hostmaster.isc.org. 2018092500 7200 3600 24796800 3600
;; Query time: 16 msec
;; SERVER: 1.1.1.1#53(1.1.1.1)
;; WHEN: Mon Oct 01 13:40:13 CEST 2018
;; MSG SIZE rcvd: 90
```
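For reference, RFC 6891 requires a responder that does not implement the request's EDNS VERSION to answer with RCODE BADVERS (16); the expected behaviour as toy logic (not kresd code):

```python
NOERROR, BADVERS = 0, 16

def edns_rcode(request_edns_version, supported_version=0):
    """RFC 6891 section 6.1.3: signal BADVERS for unsupported EDNS versions."""
    if request_edns_version > supported_version:
        return BADVERS
    return NOERROR
```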
Test suite:
https://gitlab.isc.org/isc-projects/DNS-Compliance-Testing
run `genreport -R` with input like:
`nic.cz. resolver.test. 1.1.1.1`
Output at the moment:
```
nic.cz. @1.1.1.1 (resolver.test.): dns=ok edns=ok edns1=noerror,badversion,soa edns@512=ok ednsopt=ok edns1opt=noerror,badversion,soa do=ok ednsflags=ok optlist=ok signed=ok,yes ednstcp=ok
```

Restrict how long a delegation can be refreshed in cache (#403)
https://gitlab.nic.cz/knot/knot-resolver/-/issues/403
2020-02-28, Marek Vavrusa

Currently the NS record for a domain delegation can be refreshed in cache by queries arriving near its expiration time. This is good because the NS record can be prefetched ahead of time, but it also means that when a domain moves to a different DNS provider, the resolver will never know: as long as the NS record keeps getting refreshed from the child side of the delegation, it will never go back to the TLD to check whether the zone delegation changed.
In order to fix this, the resolver will have to track how the NS record was cached. One possible solution is to add an inception time which would only be updated when the NS record first enters the cache from its parent; or restrict the number of times a record can be updated before it expires; or just prevent NS records from being updated until they have fully expired.
What's the best way to fix this?
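The first option can be sketched as follows; all names and the one-week cap are illustrative, not kresd's cache API:

```python
import time

MAX_CHILD_REFRESH_S = 7 * 24 * 3600  # example cap: one week from parent-side inception

class NSEntry:
    """Cached NS RRset with an inception time set by parent-side answers only."""
    def __init__(self, now=None):
        self.inception = now if now is not None else time.time()

    def may_refresh_from_child(self, now=None):
        """Allow child-side refreshes only within the cap; then force a parent re-check."""
        now = now if now is not None else time.time()
        return (now - self.inception) < MAX_CHILD_REFRESH_S

    def refreshed_from_parent(self, now=None):
        """An answer from the parent (TLD) side restarts the window."""
        self.inception = now if now is not None else time.time()
```

Once the cap elapses, the resolver would go back to the TLD, notice any delegation change, and reset the window.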