Knot Resolver issues
https://gitlab.nic.cz/knot/knot-resolver/-/issues

https://gitlab.nic.cz/knot/knot-resolver/-/issues/563
cache problems with DRBD backed systems (Matt Taggart, 2020-04-16)

We have a VM that is running Knot Resolver (5.0.1). The VM is managed by [Ganeti](http://www.ganeti.org/) and runs on top of [DRBD](https://en.wikipedia.org/wiki/Distributed_Replicated_Block_Device), with LVM LVOLs hosted on SSDs below that. DRBD mirrors the disk across servers so Ganeti can live-migrate VMs from server to server.
This VM is a moderately loaded webserver, so it is doing quite a few lookups, but overall traffic on the public interface is under 0.4 Mbit/s. When we look at the network bandwidth associated with the DRBD device, however, we see 400 Mbit/s (50 MBytes/s). As an experiment we put the cache in a tmpfs, as suggested in the [docs](https://knot-resolver.readthedocs.io/en/stable/daemon-bindings-cache.html#persistence), and that completely eliminated the traffic.
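For reference, the tmpfs experiment boiled down to the following; the mount options, size, and paths are just what we happened to use (not recommendations), and the `cache.open()` call is the documented way to point kresd at a cache directory:

```lua
-- fstab entry for the tmpfs (illustrative values):
--   tmpfs /var/cache/knot-resolver tmpfs rw,size=2G,uid=knot-resolver,gid=knot-resolver,nosuid,nodev,noexec,mode=0700 0 0

-- kresd.conf: open the cache inside the tmpfs-backed directory
cache.open(100 * MB, 'lmdb:///var/cache/knot-resolver')
```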
So something about the way the disk caching works results in i/o patterns that don't work well with DRBD. A couple of things I thought of:
* DRBD only sends writes across the wire, reads can be resolved locally. So this is some sort of write i/o
* This is not just lookup associated i/o being written to cache, as I mentioned the traffic into the system is much lower. So there is some sort of amplification going on here. Maybe large sections of cache are getting flushed every time there is a small update?
* How often is data in RAM being flushed to the disk cache? Does it flush for every query or is it timed or when hitting some size threshold? Say it flushes every 10 seconds, does that mean on server reboot that the cache that gets loaded would only be missing the last 10 seconds of queries?
* We have other VMs that are running knot resolver and are not seeing this issue, but they don't do as much web traffic. We suspect they are also causing more i/o, but we don't have good numbers for this yet.
I think a good test would be setting up a DRBD between two hosts and putting the cache on that (and that would help rule out qemu/ganeti/LVM) and then hitting it with a bunch of lookups.
Let me know if you have questions or ideas of things to try, or need more details to reproduce. Thanks.

https://gitlab.nic.cz/knot/knot-resolver/-/issues/562
error: /usr/lib/knot-resolver/kres_modules/prefill.lua:32: attempt to index field 'bg_worker' (a nil value) (Gaspard d'Hautefeuille, 2020-04-14)

Hi,
I get this error when I use the prefill module in the latest version.
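For context, a minimal prefill setup of the kind that triggers the error might look like this (URL and interval are taken from the module's documentation; my actual config may differ):

```lua
-- Load the prefill module and keep the root zone warmed from a zone file.
modules.load('prefill')
prefill.config({
	['.'] = {
		url = 'https://www.internic.net/domain/root.zone',
		interval = 86400, -- seconds between zone re-downloads
	}
})
```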
Thanks,
HLFH

https://gitlab.nic.cz/knot/knot-resolver/-/issues/559
handle conflicting trust anchor & negative trust anchor definitions (Vladimír Čunát, 2020-05-07)

People could reasonably expect that adding a root negative trust anchor would disable validation (everywhere):
```lua
trust_anchors.set_insecure({'.'})
```
but that is not so, at least if built with `-Dkeyfile_default=foo` (usual in distros; maybe in some other configs as well).
Our documented way to _completely_ disable validation seems to work
```lua
trust_anchors.remove('.')
```
and we certainly discourage such things, so I don't expect this to be an important issue. In particular, using NTAs below the root seems to work fine. _I suspect the issue is having both a TA and an NTA on the same name._

https://gitlab.nic.cz/knot/knot-resolver/-/issues/558
Redundant parallel queries for nonexistent AAAA records generated when querying for names from one zone repeatedly (Štěpán Balážik, 2021-01-04)

Steps to reproduce on an isolated network:
1. Set up an authoritative server on 1.0.0.100, with simulated static (100 ms) latency to the resolver, serving this zone:
```
. 86400 IN SOA j.root-servers.net. nstld.verisign-grs.com. 2019072500 1800 900 604800 86400
*. 3600 IN A 1.1.1.1
. 86400 IN NS j.root-servers.net.
j.root-servers.net. 86400 IN A 1.0.0.100
```
2. Start the resolver on 1.0.0.1 with these root hints, capturing the traffic to a PCAP file:
```
. 3600000 NS J.ROOT-SERVERS.NET.
J.ROOT-SERVERS.NET. 3600000 A 1.0.0.100
```
3. Query the resolver with questions in the root zone that trigger cache misses (with `for i in $(seq 0 $(( 2 * N ))); do echo "$i A"; done | dnsperf -q $N -a 2.0.0.1 -s 1.0.0.1`). Capture the traffic to a PCAP file.
Note that the `-q` option in `dnsperf` sets the maximum number of outstanding queries.
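As an aside, the root hints from step 2 can also be set directly in the kresd configuration through the hints module; a sketch (syntax per the hints module documentation):

```lua
-- Load hints before the iterator and point the root at the test authoritative:
modules.load('hints > iterate')
hints.root({ ['j.root-servers.net.'] = '1.0.0.100' })
```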
Now observe the PCAP. The first N answers from the resolver look OK; the next N, however, are more than 4 times slower than the rest.
This is an example for N=20:
![image](/uploads/b0f7ac9519b792f7ccb0e431a01f5caa/image.png)
The verbose log for all of these looks like this:
```
[00000.00][plan] plan '20.' type 'A' uid [00020.00]
[00020.00][iter] '20.' type 'A' new uid was assigned .01, parent uid .00
[00020.01][cach] => no NSEC* cached for zone: .
[00020.01][cach] => skipping zone: ., NSEC, hash 0;new TTL -123456789, ret -2
[00020.01][resl] => going insecure because there's no covering TA
[00020.01][zcut] found cut: . (rank 020 return codes: DS -2, DNSKEY -2)
[00020.01][plan] plan 'j.root-servers.net.' type 'AAAA' uid [00020.02]
[00020.02][iter] 'j.root-servers.net.' type 'AAAA' new uid was assigned .03, parent uid .01
[00020.03][cach] => no NSEC* cached for zone: .
[00020.03][cach] => skipping zone: ., NSEC, hash 0;new TTL -123456789, ret -2
[00020.03][iter] <= rcode: NOERROR
[00020.03][iter] <= retrying with non-minimized name
[00020.03][cach] => not overwriting NS net.
[00020.03][resl] <= server: '1.0.0.100' rtt: 64 ms
[00020.03][iter] 'j.root-servers.net.' type 'AAAA' new uid was assigned .04, parent uid .01
[00020.04][iter] <= rcode: NOERROR
[00020.04][cach] => not overwriting AAAA j.root-servers.net.
[00020.04][resl] <= server: '1.0.0.100' rtt: 101 ms
[00020.01][iter] '20.' type 'A' new uid was assigned .05, parent uid .00
[00020.05][plan] plan 'j.root-servers.net.' type 'A' uid [00020.06]
[00020.06][iter] 'j.root-servers.net.' type 'A' new uid was assigned .07, parent uid .05
[00020.07][cach] => skipping unfit NS packet: rank 020, new TTL 86400
[00020.07][cach] => no NSEC* cached for zone: .
[00020.07][cach] => skipping zone: ., NSEC, hash 0;new TTL -123456789, ret -2
[00020.07][iter] <= rcode: NOERROR
[00020.07][iter] <= retrying with non-minimized name
[00020.07][cach] => not overwriting NS net.
[00020.07][resl] <= server: '1.0.0.100' rtt: 100 ms
[00020.07][iter] 'j.root-servers.net.' type 'A' new uid was assigned .08, parent uid .05
[00020.08][iter] <= rcode: NOERROR
[00020.08][cach] => not overwriting A j.root-servers.net.
[00020.08][resl] <= server: '1.0.0.100' rtt: 100 ms
[00020.05][iter] '20.' type 'A' new uid was assigned .09, parent uid .00
[00020.09][resl] => id: '42223' querying: '1.0.0.100#00053' score: 100 zone cut: '.' qname: '20.' qtype: 'A' proto: 'udp'
[00020.09][iter] <= rcode: NOERROR
[00020.09][cach] => stashed 20. A, rank 020, 20 B total, incl. 0 RRSIGs
[00020.09][resl] <= server: '1.0.0.100' rtt: 103 ms
[00020.09][resl] AD: request NOT classified as SECURE
[00020.09][resl] finished: 4, queries: 3, mempool: 16400 B
```
Two `AAAA` and one `A` resolver queries are generated unnecessarily for each of these client queries.

https://gitlab.nic.cz/knot/knot-resolver/-/issues/557
Resolver retransmits too early (Štěpán Balážik, 2021-01-04)

Steps to reproduce on an isolated network:
1. Set up an authoritative server on 1.0.0.100, with simulated static (100 ms) latency to the resolver, serving this zone:
```
. 86400 IN SOA j.root-servers.net. nstld.verisign-grs.com. 2019072500 1800 900 604800 86400
*. 3600 IN A 1.1.1.1
. 86400 IN NS j.root-servers.net.
j.root-servers.net. 86400 IN A 1.0.0.100
```
2. Start the resolver on 1.0.0.1 with these root hints, capturing the traffic to a PCAP file:
```
. 3600000 NS J.ROOT-SERVERS.NET.
J.ROOT-SERVERS.NET. 3600000 A 1.0.0.100
```
3. Query the resolver with questions in the root zone that trigger cache misses (with `for i in $(seq 0 9999); do echo "$i A"; done | dnsperf -q 100 -a 2.0.0.1 -s 1.0.0.1`). Capture the traffic to a PCAP file.
4. Observe the PCAP for `Destination unreachable (Port unreachable)` packets, which are the result of the early retransmits (as seen in the screenshot from Wireshark below).
![image](/uploads/65e2c62d0b22abdb5f4c89e2c806e37b/image.png)
These retransmits happen 110 to 150 ms after the original transmit. The effect is more pronounced when the number of outstanding queries is higher (the `-q` argument to `dnsperf`). About 1 % of queries in this test result in a retransmit.
The verbose log for the query from the screenshot shows the two queries as well:
```
[00000.00][plan] plan '7251.' type 'A' uid [07251.00]
[07251.00][iter] '7251.' type 'A' new uid was assigned .01, parent uid .00
[07251.01][cach] => no NSEC* cached for zone: .
[07251.01][cach] => skipping zone: ., NSEC, hash 0;new TTL -123456789, ret -2
[07251.01][resl] => going insecure because there's no covering TA
[07251.01][zcut] found cut: . (rank 020 return codes: DS -2, DNSKEY -2)
[07251.01][resl] => id: '24592' querying: '1.0.0.100#00053' score: 100 zone cut: '.' qname: '7251.' qtype: 'A' proto: 'udp'
[07251.01][resl] => id: '24592' querying: '1.0.0.100#00053' score: 100 zone cut: '.' qname: '7251.' qtype: 'A' proto: 'udp'
[07251.01][iter] <= rcode: NOERROR
[07251.01][cach] => stashed 7251. A, rank 020, 20 B total, incl. 0 RRSIGs
[07251.01][resl] <= server: '1.0.0.100' rtt: 148 ms
[07251.01][resl] AD: request NOT classified as SECURE
[07251.01][resl] finished: 4, queries: 1, mempool: 16400 B
```
Now that I think of it, this is probably caused by the packet being stuck in a buffer somewhere, but I think the resolver should cope with this in better ways than generating more traffic. A 110 ms timeout seems too low for a server with 100 ms latency. This is therefore related to #447.

https://gitlab.nic.cz/knot/knot-resolver/-/issues/556
policy: filters that use query don't work with postrules (Tomas Krizek, 2020-04-03)

`policy.suffix` or `policy.pattern` filters don't work when the policy is evaluated as a postrule, because in the finish phase `req:current()` is nil.
One use case that doesn't work is the `qname` filter with a `reroute`/`rewrite` action in the `daf` module:
```
daf.add('qname = example.com reroute 192.0.2.0/24-127.0.0.0')
```

https://gitlab.nic.cz/knot/knot-resolver/-/issues/554
Lua command map() does not work with multiple instances started using systemd (Petr Špaček, 2020-10-27)

This affects all instances which do not use the `-f` option (which is deprecated anyway).
We need to rewrite the `map()` command to use control sockets (instead of pipes inherited from the parent process) or replace it with something completely different.

https://gitlab.nic.cz/knot/knot-resolver/-/issues/553
Bugs on Daf method del (realPy, 2020-04-02)

There was a mistake in the del method of the daf module.
The id of the policy rule is not the key for deleting the entry in the `daf.rules` array.
See the attached patch
[patch](/uploads/c2a7e5bda6e23e61b747ce57fd037223/patch)

https://gitlab.nic.cz/knot/knot-resolver/-/issues/552
Segmentation fault in stats.c (MartB, 2020-03-25)

Hey there, I just upgraded to Fedora 32 and kresd started crashing at the following line:
https://gitlab.labs.nic.cz/knot/knot-resolver/-/blob/v5.0.1/modules/stats/stats.c#L496
```
Program received signal SIGSEGV, Segmentation fault.
0x00007ffff6ba20fe in stats_init (module=0x5555555d6170) at ../modules/stats/stats.c:496
496 sa->sa_family = AF_UNSPEC;
(gdb) l
491 if (array_reserve(data->upstreams.q, UPSTREAMS_COUNT) != 0) {
492 return kr_error(ENOMEM);
493 }
494 for (size_t i = 0; i < UPSTREAMS_COUNT; ++i) {
495 struct sockaddr *sa = (struct sockaddr *)&data->upstreams.q.at[i];
496 sa->sa_family = AF_UNSPEC;
497 }
498 return kr_ok();
499 }
500
(gdb) print sa
$2 = (struct sockaddr *) 0x0
(gdb) print data->upstreams
$3 = {q = {at = 0x5555555efba0, len = 0, cap = 1024}, head = 0}
```
Why does `sa` evaluate to 0x0? Did something change with regard to memory allocation for the ring buffer used?
I can't quite figure out what's going on here; any help is appreciated.

https://gitlab.nic.cz/knot/knot-resolver/-/issues/550
LMDB error: MDB_BAD_RSLOT (Jiří Helebrant, 2020-03-06)

I somehow managed to crash kresd while testing [adam/dns-crawler](https://gitlab.labs.nic.cz/adam/dns-crawler) (i.e. a lot of queries).
end of the log:
```
[cache] MDB_BAD_TXN, probably overfull
[cache] clearing because overfull, ret = -28
[cache] LMDB error: MDB_BAD_RSLOT: Invalid reuse of reader locktable slot
[cache] LMDB error: MDB_BAD_RSLOT: Invalid reuse of reader locktable slot
```
- [full kresd.log](/uploads/baa8516ff8198277ad9fc5255a24bbef/kresd.log) (with bogus_log, so probably `grep -v DNSSEC`…)
- [kresd.conf](/uploads/a9b8a558208e2b6847a88cb986dbe471/kresd.conf)
- `MDB_BAD_RSLOT` seems to come from [mdb.c#L2693](https://github.com/LMDB/lmdb/blob/LMDB_0.9.24/libraries/liblmdb/mdb.c#L2693)
- knot-resolver 5.0.1, lmdb 0.9.24, knot/libknot 2.9.2, Gentoo

https://gitlab.nic.cz/knot/knot-resolver/-/issues/549
lib/knot-resolver/sandbox.lua:399: can't open cache path '.'; (Yarema, 2020-03-06)

With version 5.0.1 on FreeBSD 12.1 I get the following when starting `kresd`:
```
kresd -n -c /usr/local/etc/knot-resolver/kresd.conf /var/db/kresd
[system] error while loading config: /usr/local/lib/knot-resolver/sandbox.lua:399: can't open cache path '.'; working directory '/var/db/kresd'; Invalid argument (workdir '/var/db/kresd')
```
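For comparison, a config that sets the cache location explicitly, instead of relying on the working directory, would look like this (a sketch using the documented `cache.open()`; the size is illustrative and I haven't verified this on FreeBSD):

```lua
-- Use an absolute LMDB path so the cache does not depend on the workdir:
cache.open(100 * MB, 'lmdb:///var/db/kresd')
```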
The LMDB files are created as specified with `cache.size`, but then kresd gives up with the above error.

https://gitlab.nic.cz/knot/knot-resolver/-/issues/547
SERVFAIL when VPN active (Leonardo Brondani Schenkel, 2020-03-02)
Knot Resolver, version 3.2.1, shipped with TurrisOS 4.0.5.
When I'm not using any VPN, all domains resolve. When I enable my VPN, which replaces my default gateway, domains such as `bit.ly` and `storage.googleapis.com` no longer resolve and `kresd` returns `SERVFAIL`. If I disable the VPN, those domains immediately start resolving again.
I see no evidence of tampering from the VPN side, since querying via `dig @1.1.1.1` and `dig @8.8.8.8` works. And if I enable TLS forwarding to Cloudflare, the same behaviour persists (but only when the VPN is active).
This was not the case some time ago. I haven't changed the router configuration, and this behaviour started happening recently. I presume some recent update triggered it.
I didn't see any specific errors in the logs that could shed any light on this behaviour. I am a developer myself and fairly technical. Please let me know which configuration files or logs you want me to include, or any troubleshooting steps I can take. I'm a bit lost as to why this is happening and don't know how to diagnose it.
This seems to be a very similar report: https://forum.turris.cz/t/openvpn-dns-not-working-when-connected-to-protonvpn/11365

https://gitlab.nic.cz/knot/knot-resolver/-/issues/546
[webmgmt] Use javascript secure scheme detection instead of server detection (analogic, 2020-02-10)

Please see:
https://gitlab.labs.nic.cz/knot/knot-resolver/blob/master/modules/http/static/kresd.js#L335
This won't work with the latest browsers when we are using a reverse proxy (with auth) like this:
`(client) -> https://company.com/mgmt (reverse-proxy) -> http://resolver:8453`
The result is `ws://...`, which gets blocked because it is not safe to use on HTTPS.
Better would be something like this, which gets evaluated entirely by the browser:
```
var wsStats = ('https:' == document.location.protocol ? 'wss://' : 'ws://') + location.host + '/stats';
```

https://gitlab.nic.cz/knot/knot-resolver/-/issues/545
LUA error blocks http server (analogic, 2020-02-28)

```
[worker.background] error: /usr/share/lua/5.1/http/h2_stream.lua:88: invalid state progression ('closed' to 'closed') stack traceback:
[C]: in function 'error'
/usr/share/lua/5.1/http/h2_stream.lua:88: in function 'set_state'
/usr/share/lua/5.1/http/h2_stream.lua:565: in function 'handler'
/usr/share/lua/5.1/http/h2_connection.lua:204: in function 'handle_frame'
/usr/share/lua/5.1/http/h2_connection.lua:243: in function 'step'
/usr/share/lua/5.1/http/h2_connection.lua:352: in function 'get_next_incoming_stream'
/usr/share/lua/5.1/http/server.lua:147: in function </usr/share/lua/5.1/http/server.lua:127>
[worker.background] error: /usr/share/lua/5.1/http/h2_stream.lua:88: invalid state progression ('closed' to 'closed') stack traceback:
[C]: in function 'error'
/usr/share/lua/5.1/http/h2_stream.lua:88: in function 'set_state'
/usr/share/lua/5.1/http/h2_stream.lua:565: in function 'handler'
/usr/share/lua/5.1/http/h2_connection.lua:204: in function 'handle_frame'
/usr/share/lua/5.1/http/h2_connection.lua:243: in function 'step'
/usr/share/lua/5.1/http/h2_connection.lua:352: in function 'get_next_incoming_stream'
/usr/share/lua/5.1/http/server.lua:147: in function </usr/share/lua/5.1/http/server.lua:127>
```
I am trying to set up Let's Encrypt + client CA with Traefik in front of the webmgmt interface of Knot Resolver. It works for the first two browser sessions, and then these errors appear and the web server gets blocked...

https://gitlab.nic.cz/knot/knot-resolver/-/issues/544
Debian Repo (Magnus Frühling, 2020-02-05)

Hi there,
is there any plan for a Debian repo again?
similar to `deb https://deb.knot-dns.cz/knot-resolver/ stretch main` back then...

https://gitlab.nic.cz/knot/knot-resolver/-/issues/543
kres-cache-gc.service has wrong cache path (Jean-Daniel, 2020-02-03)

Version: 5.0.0
The distributed kres-cache-gc.service file specifies the wrong path for the cache directory, making it pointless.
The default cache path is `/var/cache/knot-resolver` and it uses `/var/lib/knot-resolver`, resulting in the log being cluttered by `Error: /var/lib/knot-resolver does not exist or is not a LMDB` messages.
For the record, this is the installed file:
```
[Unit]
Description=Knot Resolver Garbage Collector daemon
Documentation=man:kresd.systemd(7)
Documentation=man:kresd(8)
[Service]
Type=simple
ExecStart=/usr/sbin/kres-cache-gc -c /var/lib/knot-resolver -d 1000
User=knot-resolver
Group=knot-resolver
Restart=on-failure
RestartSec=30
StartLimitInterval=400
StartLimitBurst=10
Slice=system-kresd.slice
[Install]
WantedBy=kresd.target
```
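Until the packaged unit is fixed, a local systemd drop-in can override `ExecStart` with the real cache path (a sketch following the standard override convention; the path is the default cache directory named in the report above):

```
# /etc/systemd/system/kres-cache-gc.service.d/override.conf
[Service]
# An empty ExecStart clears the packaged value before setting the corrected one.
ExecStart=
ExecStart=/usr/sbin/kres-cache-gc -c /var/cache/knot-resolver -d 1000
```

A `systemctl daemon-reload` followed by a restart of the service should then make the LMDB error messages stop.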
Env: Ubuntu 18.04 with apt source: https://download.opensuse.org/repositories/home:/CZ-NIC:/knot-resolver-latest/xUbuntu_18.04/ /
https://gitlab.nic.cz/knot/knot-resolver/-/issues/542
[tls_client] session resumption does not work properly (Vladimír Čunát, 2022-02-18)

It doesn't break the handshake, but resumption never happens. Maybe it's broken just on TLS 1.3, or under some similar condition. I tried this with quad-{1,8,9} and it looks the same in the verbose log.
We do receive resumption tickets from upstream:
```
[gnutls] (4) HSK[0x1644310]: NEW SESSION TICKET (4) was received. Length 246[496], frag offset 0, frag length: 246, sequence: 0
```
but we never send them on re-connection (no idea why so far):
```
[gnutls] (4) EXT[0x1644310]: Preparing extension (Session Ticket/35) for 'client hello'
[gnutls] (4) EXT[0x1644310]: Sending extension Session Ticket/35 (0 bytes)
```
and thus the session can't resume.
```
[tls_client] TLS session has not resumed
```
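For reproduction, the forwarding config exercised here was of this shape (quad-1 shown; the hostname is the provider's published DoT name, copy it from their docs rather than from here):

```lua
-- Forward everything over TLS to 1.1.1.1 and watch the verbose log for
-- session-resumption attempts:
verbose(true)
policy.add(policy.all(policy.TLS_FORWARD({
	{ '1.1.1.1', hostname = 'cloudflare-dns.com' },
})))
```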
_Tested with the latest releases: Knot Resolver 4.3.0 and GnuTLS 3.6.11.1._

https://gitlab.nic.cz/knot/knot-resolver/-/issues/541
CI: optimize packaging tests (Petr Špaček, 2020-05-27)

Packaging tests merged in !892 do their job, but they are too slow for an automated run on every commit.
Ideas for improvement:
- [ ] use py.test framework
- [ ] use our own image cache instead of the implicit and imperfect Docker build cache
- [ ] explicitly split base-image preparation from the test itself
@tkrizek has even more ideas.

https://gitlab.nic.cz/knot/knot-resolver/-/issues/540
redirecting a name elsewhere using user-supplied CNAME (Google SafeSearch) (Mr. Blue Coat, 2019-12-20)

Sorry if this is obvious, but I couldn't find any info on how to enable Google SafeSearch with Knot Resolver using a CNAME: https://www.leowkahman.com/2017/09/11/enforce-safe-search-on-google-youtube-bing/

https://gitlab.nic.cz/knot/knot-resolver/-/issues/538
lower default EDNS buffer size to 1232 (Petr Špaček, 2020-10-26)

The default needs to follow https://dnsflagday.net/2020/
This should prevent #300 from happening.
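Until the default changes, the advertised EDNS buffer size can be lowered in the config (a sketch; in some kresd versions `net.bufsize()` takes separate downstream/upstream arguments, so check the docs for the exact signature):

```lua
-- Advertise the DNS Flag Day 2020 recommended EDNS buffer size:
net.bufsize(1232)
```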