# Knot Resolver issues
https://gitlab.nic.cz/knot/knot-resolver/-/issues

## Issue #720: Control sockets on relative paths fail
Vaclav Sraier · 2022-02-06 · https://gitlab.nic.cz/knot/knot-resolver/-/issues/720

With this config:
```
local path = '/tmp/control/1'
local ok, err = pcall(net.listen, path, nil, { kind = 'control' })
if not ok then
log_warn(ffi.C.LOG_GRP_NETWORK, 'bind to '..path..' failed '..err)
end
```
everything works perfectly.
This config though:
```
local path = './control/1'
local ok, err = pcall(net.listen, path, nil, { kind = 'control' })
if not ok then
log_warn(ffi.C.LOG_GRP_NETWORK, 'bind to '..path..' failed '..err)
end
```
Fails with this error message:
```
Feb 05 23:03:41 dingo kresd[169462]: [net ] bind to './control/1@53' (TCP): Invalid argument
Feb 05 23:03:41 dingo kresd[169462]: [net ] bind to ./control/1 failed error occurred here (config filename:lineno is at the bottom, if config is involved):
Feb 05 23:03:41 dingo kresd[169462]: stack traceback:
Feb 05 23:03:41 dingo kresd[169462]: [C]: at 0x556c94d0eae0
Feb 05 23:03:41 dingo kresd[169462]: [C]: in function 'pcall'
Feb 05 23:03:41 dingo kresd[169462]: kresd_1.conf:144: in main chunk
Feb 05 23:03:41 dingo kresd[169462]: ERROR: net.listen() failed to bind
```
It looks like the `kind` argument is completely ignored and defaults are assumed (UDP + TCP on port 53).
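Until this is fixed, a possible workaround (an untested sketch; the base directory below is only an example) is to resolve relative paths to absolute ones before handing them to `net.listen()`:

```lua
-- Untested workaround sketch: resolve relative socket paths to absolute
-- ones before binding; '/run/knot-resolver' is only an example base dir.
local base = '/run/knot-resolver'
local path = 'control/1'
if path:sub(1, 1) ~= '/' then
    path = base .. '/' .. path
end
local ok, err = pcall(net.listen, path, nil, { kind = 'control' })
if not ok then
    log_warn(ffi.C.LOG_GRP_NETWORK, 'bind to ' .. path .. ' failed ' .. tostring(err))
end
```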
EDIT: Tested on `a2c339a57b8a6fb1c6bbaa83ed4bfdbe742a5fd0` (HEAD of the `manager` branch).

## Issue #715: integration of manager into kresd
Tomas Krizek · 2023-09-28 · https://gitlab.nic.cz/knot/knot-resolver/-/issues/715

Let this issue be a checklist of requirements/ideas that need to be done before we're ready to merge manager into master. Feel free to edit the description and add your TODOs as well.
### Requirements
- [ ] config: verify that all values in the datamodel jinja2 templates are either (a) escaped or (b) validated before use (to prevent code injection from declarative values to lua) [goal: security - API should not be abusable] (related !1291)
- [ ] config: ensure all recently added lua configuration options have been added to the declarative config as well (e.g. go through the NEWS file and check) and make sure it won't be a problem in the future.
- [x] new config for kresd < 5.5.0 !1289
- [ ] new config for kresd >= 5.5.0
- [x] new declarative policy module !1313
- [ ] config: update our default/example [configs](https://gitlab.nic.cz/knot/knot-resolver/-/tree/master/etc/config)
- [x] packaging: ensure all manager's dependencies have been properly added in `distro/pkg` (related !1248)
- [x] packaging: cover the most basic use-cases by packaging tests executed on all target distros (related #713)
- [ ] tests: manually test migration path on all target distros
- [x] usability: prepare [systemd files](https://gitlab.nic.cz/knot/knot-resolver/-/tree/master/systemd) for manager
- [x] usability: figure out how to support declarative config on unsupported platforms (CentOS7) and in our [docker image](https://gitlab.nic.cz/knot/knot-resolver/-/blob/master/Dockerfile) (related #734)
- [x] usability: ensure that the manager is applicable to ODVR usecase (separate workers/instances for each DNS protocol)
- [x] docs: document new way of using kresd with manager, including systemd interaction, quick start guide, declarative config docs, how to get logs etc.
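As an illustration of the first requirement above (a generic Lua sketch, not the manager's actual templating code): any declarative string value interpolated verbatim into generated Lua source can escape into executable code, whereas quoting it, e.g. with `string.format('%q', ...)`, keeps it inert. `hints.set` is used here only as an example of a generated call.

```lua
-- Generic illustration only; not the manager's actual template code.
-- A declarative value interpolated verbatim into generated Lua source:
local value = "evil' .. os.execute('id') .. '"
local unsafe = "hints.set('" .. value .. "')"   -- value escapes the string literal
-- Quoting with %q keeps the value inert:
local safe = "hints.set(" .. string.format('%q', value) .. ")"
print(safe)
```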
### Suggestions
- [ ] tests: comprehensive unit tests of configuration: prepare a collection of example declarative configs and their lua counterparts; use CLI conversion tool to verify these
- [ ] logging: ensure logs from manager look consistent with kresd logs
- [x] logging: try to find a way to display aggregated log output
- [x] usability: support supervisord for containers
- [ ] usability: keep manager component optional (minimal use-case: only run config conversion, but use current kresd@1 approach)
- [ ] blog: blogpost(s) about the manager, comparison with `kresd@`, benefits, examples

Milestone: 6.0.0

## Issue #714: meson_version needs increasing
daurnimator · 2023-09-28 · https://gitlab.nic.cz/knot/knot-resolver/-/issues/714

The following warning appears when building:
```
Build targets in project: 31
WARNING: Project specifies a minimum meson_version '>=0.49' but uses features which were added in newer versions:
* 0.52.0: {'priority arg in test'}
NOTICE: Future-deprecated features used:
* 0.56.0: {'Dependency.get_pkgconfig_variable'}
```

## Issue #709: datamodel: network: more readable 'kind' in listen interfaces
Aleš Mrázek · 2023-09-28 · https://gitlab.nic.cz/knot/knot-resolver/-/issues/709
- `dns-over-https` -> `doh`
- `dns-over-tls` -> `dot`

## Issue #707: Add integration test with some complex configuration
Vaclav Sraier · 2023-09-28 · https://gitlab.nic.cz/knot/knot-resolver/-/issues/707

For example, try to translate the configuration from ODVR and see if it works. The ODVR configuration can be found in the discussion of issue knot-resolver-manager#38.

## Issue #704: Add tests for all quick start configuration snippets in kresd documentation
Vaclav Sraier · 2023-09-28 · Assignee: Aleš Mrázek · https://gitlab.nic.cz/knot/knot-resolver/-/issues/704

https://knot-resolver.readthedocs.io/en/stable/modules-policy.html

## Issue #700: kresd process manager: decouple restarts from config change requests
Vaclav Sraier · 2022-11-19 · https://gitlab.nic.cz/knot/knot-resolver/-/issues/700

- goals:
  - increase throughput for config changes
- limitations:
  - we can't make a config change faster, as we have to restart everything
- proposed solution:
  - keep track of config versions and restart `kresd`s continuously, decoupled from requests; mark a request as finished once the config version of all `kresd`s is higher

## Issue #691: How to use static hints for local PTR records?
Jon Polom · 2021-12-25 · https://gitlab.nic.cz/knot/knot-resolver/-/issues/691

Is it possible to use the static hints module to provide local PTR records? This is [hinted at](https://knot-resolver.readthedocs.io/en/stable/modules-hints.html#static-hints) in the documentation, but no example is provided. Perhaps I am misinterpreting what is possible with kresd; if that is the case, please clarify.

## Issue #687: serve_stale module doesn't provide stale answers when auths are unresponsive
Tomas Krizek · 2022-03-09 · https://gitlab.nic.cz/knot/knot-resolver/-/issues/687

As of version 5.4.2, the `serve_stale` module doesn't work when auth servers are unresponsive (which is the typical case with network issues). The server selection algorithm tries very hard to resolve the request by re-trying different auth servers and increasing their allowed timeouts, until the request ultimately times out and returns SERVFAIL instead of a stale answer.
If the auth servers are reachable but REFUSE to respond, the serve_stale module works as expected (that was our former test case with deckard).
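For context, the module is enabled with a one-line Lua config (this is the form shown in the serve_stale module documentation; the `<` orders it before the cache module in the processing pipeline):

```lua
-- Enable serve_stale, ordered before the cache module in the pipeline.
modules = { 'serve_stale < cache' }
```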
Some notes about possible resolution:
- to be useful for clients, the stale answer should be provided quickly enough ([RFC 8767, section 5](https://datatracker.ietf.org/doc/html/rfc8767#section-5) suggests sending the stale answer after 1.8 s). The timeout used for serve_stale should ideally be configurable.
- the request resolution should keep going even after the stale answer is sent to the client, to refresh data from slower auth servers (possible option: spawn a new duplicate internal request after providing the stale answer?)
- server selection should have a configurable time limit that is respected and allows serve_stale to activate in time
- the server selection time limit shouldn't be used unless the serve_stale module is loaded _and_ there is a possible stale answer in the cache

## Issue #684: ANSWER section not empty on SERVFAIL
Tomas Krizek · 2021-11-04 · https://gitlab.nic.cz/knot/knot-resolver/-/issues/684

In some cases, the ANSWER section contains (unvalidated) data while the request ends with SERVFAIL.
In my specific conditions, the issue seems reproducible when:
- cache is clear
- IPv6 isn't available, but isn't turned off with net.ipv6
- server selection chooses specific servers (and typically chooses the non-functioning IPv6 ones)
```
$ kdig @::1 -p 5553 +timeout=16 +edns signotincepted.bad-dnssec.wb.sidnlabs.nl
;; ->>HEADER<<- opcode: QUERY; status: SERVFAIL; id: 6998
;; Flags: qr rd ra; QUERY: 1; ANSWER: 1; AUTHORITY: 0; ADDITIONAL: 1
;; EDNS PSEUDOSECTION:
;; Version: 0; flags: ; UDP size: 1232 B; ext-rcode: NOERROR
;; QUESTION SECTION:
;; signotincepted.bad-dnssec.wb.sidnlabs.nl. IN A
;; ANSWER SECTION:
signotincepted.bad-dnssec.wb.sidnlabs.nl. 3600 IN A 94.198.159.39
;; Received 85 B
;; Time 2021-11-04 10:45:32 CET
;; From ::1@5553(UDP) in 10027.7 ms
```
See attached [log.txt](/uploads/8d1aa54458e26860a5d0f4e36d105cad/log.txt)

## Issue #683: performance problem because of shared cache
Hamza Kılıç · 2021-10-26 · https://gitlab.nic.cz/knot/knot-resolver/-/issues/683

I am making benchmarks for a project, sending 10M queries to the resolvers under test.
- Every test starts from a cold start.
- 8 processes are started.
- Measured: core utilization (%), pps, elapsed milliseconds, and download Mbps.
I found an interesting result comparing 8 processes with a shared cache in the same folder (/var/cache/knot-resolver) against 8 processes with separate cache folders. The results look approximately like this:

| metric | shared cache | separate caches |
|---|---|---|
| utilization per core (all 8 cores) | ~60 % | ~99 % |
| pps | ~20000 | ~30000 |
| elapsed milliseconds | ~500 | ~300 |
| download Mbps | ~100 | ~160 |

Conclusion: using a shared cache slows down performance dramatically.
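Per-instance caches, as used in the faster variant of this benchmark, can be configured by pointing each process at its own cache directory. A sketch (the base path is only an example, each directory must already exist, and `worker.id` is the instance identifier):

```lua
-- Sketch: give each kresd instance its own cache directory instead of
-- a shared one; '/var/cache/knot-resolver' is only an example base path
-- and each per-instance directory must already exist.
cache.open(100 * MB, 'lmdb:///var/cache/knot-resolver/' .. worker.id)
```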
Is there a way to fix this problem?

## Issue #679: DNSSEC failure on insecure subzone
Tomas Krizek · 2021-10-23 · https://gitlab.nic.cz/knot/knot-resolver/-/issues/679

Reported on [knot-resolver-users](https://lists.nic.cz/pipermail/knot-resolver-users/2021/000396.html) by Matthew Richardson.
Attempting to resolve `213-133-203-34.newtel.in-addr.itconsult.net. PTR` ends up with a DNSSEC failure, even though the record itself is in an insecure subzone.
> The zone cut is between itconsult.net & newtel.in-addr.itconsult.net.
> Also whilst itconsult.net is DNSSEC signed, newtel.in-addr.itconsult.net is
> not. Thus, in-addr.itconsult.net is an empty non-terminal.
>
> If one asks for NS for newtel.in-addr.itconsult.net, thereafter resolution
> of the PTR then succeeds
```
[plan ][00000.00] plan '213-133-203-34.newtel.in-addr.itconsult.net.' type 'PTR' uid [51359.00]
[iterat][51359.00] '213-133-203-34.newtel.in-addr.itconsult.net.' type 'PTR' new uid was assigned .01, parent uid .00
[cache ][51359.01] => skipping exact RR: rank 027 (min. 030), new TTL 43131
[cache ][51359.01] => trying zone: itconsult.net., NSEC3, hash c75d4f37
[cache ][51359.01] => NSEC3 depth 3: hash uabfrhboj2pe1qnmfscd0adr77hqoirb
[cache ][51359.01] => NSEC3 encloser error for 213-133-203-34.newtel.in-addr.itconsult.net.: range search miss (!covers)
[cache ][51359.01] => NSEC3 depth 2: hash 7kdfmdhll7ee02vprj1oivl33lg5r7vu
[cache ][51359.01] => NSEC3 encloser error for newtel.in-addr.itconsult.net.: range search miss (!covers)
[cache ][51359.01] => NSEC3 depth 1: hash 4je672clu0jh2pbkm6mdj2n4ps7e9t2h
[cache ][51359.01] => NSEC3 encloser: only found existence of an ancestor
[cache ][51359.01] => skipping zone: itconsult.net., NSEC, hash 0;new TTL -123456789, ret -2
[zoncut][51359.01] found cut: itconsult.net. (rank 002 return codes: DS 0, DNSKEY 0)
[select][51359.01] => id: '47786' choosing: 'd.itconsult-dns.co.uk.'@'2001:67c:10b8::100#00053' with timeout 400 ms zone cut: 'itconsult.net.'
[resolv][51359.01] => id: '47786' querying: 'd.itconsult-dns.co.uk.'@'2001:67c:10b8::100#00053' zone cut: 'itconsult.net.' qname: 'iN-ADDR.iTConSult.neT.' qtype: 'NS' proto: 'udp'
[select][51359.01] NO6: timeouted, appended, timeouts 5/6
[select][51359.01] => id: '47786' noting selection error: 'd.itconsult-dns.co.uk.'@'2001:67c:10b8::100#00053' zone cut: 'itconsult.net.' error: 1 QUERY_TIMEOUT
[iterat][51359.01] '213-133-203-34.newtel.in-addr.itconsult.net.' type 'PTR' new uid was assigned .02, parent uid .00
[select][51359.02] => id: '56910' choosing: 'd.itconsult-dns.co.uk.'@'176.97.158.100#00053' with timeout 38 ms zone cut: 'itconsult.net.'
[resolv][51359.02] => id: '56910' querying: 'd.itconsult-dns.co.uk.'@'176.97.158.100#00053' zone cut: 'itconsult.net.' qname: 'in-aDdR.itCONsuLt.neT.' qtype: 'NS' proto: 'udp'
[select][51359.02] => id: '56910' updating: 'd.itconsult-dns.co.uk.'@'176.97.158.100#00053' zone cut: 'itconsult.net.' with rtt 18 to srtt: 18 and variance: 4
[iterat][51359.02] <= rcode: NOERROR
[iterat][51359.02] <= retrying with non-minimized name
[iterat][51359.02] '213-133-203-34.newtel.in-addr.itconsult.net.' type 'PTR' new uid was assigned .03, parent uid .00
[select][51359.03] => id: '18773' choosing: 'd.itconsult-dns.co.uk.'@'176.97.158.100#00053' with timeout 38 ms zone cut: 'itconsult.net.'
[resolv][51359.03] => id: '18773' querying: 'd.itconsult-dns.co.uk.'@'176.97.158.100#00053' zone cut: 'itconsult.net.' qname: '213-133-203-34.nEWtEL.IN-AdDr.ITcONsuLt.NEt.' qtype: 'PTR' proto: 'udp'
[select][51359.03] => id: '18773' updating: 'd.itconsult-dns.co.uk.'@'176.97.158.100#00053' zone cut: 'itconsult.net.' with rtt 16 to srtt: 18 and variance: 4
[iterat][51359.03] <= rcode: NOERROR
[valdtr][51359.03] >< cut changed, needs revalidation
[resolv][51359.03] => resuming yielded answer
[valdtr][51359.03] >< no valid RRSIGs found: 213-133-203-34.newtel.in-addr.itconsult.net. PTR (0 matching RRSIGs, 0 expired, 0 not yet valid, 0 invalid signer, 0 invalid label count, 0 invalid key, 0 invalid crypto, 0 invalid NSEC)
[plan ][51359.03] plan 'in-addr.itconsult.net.' type 'DS' uid [51359.04]
[iterat][51359.04] 'in-addr.itconsult.net.' type 'DS' new uid was assigned .05, parent uid .03
[cache ][51359.05] => trying zone: itconsult.net., NSEC3, hash c75d4f37
[cache ][51359.05] => NSEC3 depth 1: hash 4je672clu0jh2pbkm6mdj2n4ps7e9t2h
[cache ][51359.05] => NSEC3 sname: match proved NODATA, new TTL 43131
[iterat][51359.05] <= rcode: NOERROR
[valdtr][51359.05] <= parent: updating DS
[valdtr][51359.05] <= answer valid, OK
[resolv][51359.03] => resuming yielded answer
[valdtr][51359.03] >< no valid RRSIGs found: 213-133-203-34.newtel.in-addr.itconsult.net. PTR (0 matching RRSIGs, 0 expired, 0 not yet valid, 0 invalid signer, 0 invalid label count, 0 invalid key, 0 invalid crypto, 0 invalid NSEC)
[plan ][51359.03] plan 'in-addr.itconsult.net.' type 'DS' uid [51359.06]
[iterat][51359.06] 'in-addr.itconsult.net.' type 'DS' new uid was assigned .07, parent uid .03
[cache ][51359.07] => trying zone: itconsult.net., NSEC3, hash c75d4f37
[cache ][51359.07] => NSEC3 depth 1: hash 4je672clu0jh2pbkm6mdj2n4ps7e9t2h
[cache ][51359.07] => NSEC3 sname: match proved NODATA, new TTL 43131
[iterat][51359.07] <= rcode: NOERROR
[valdtr][51359.07] <= parent: updating DS
[valdtr][51359.07] <= answer valid, OK
[resolv][51359.03] => resuming yielded answer
[valdtr][51359.03] >< no valid RRSIGs found: 213-133-203-34.newtel.in-addr.itconsult.net. PTR (0 matching RRSIGs, 0 expired, 0 not yet valid, 0 invalid signer, 0 invalid label count, 0 invalid key, 0 invalid crypto, 0 invalid NSEC)
[valdtr][51359.03] <= continuous revalidation, fails
[cache ][51359.03] => not overwriting PTR 213-133-203-34.newtel.in-addr.itconsult.net.
[cache ][51359.03] => not overwriting PTR 213-133-203-34.newtel.in-addr.itconsult.net.
[dnssec] validation failure: 213-133-203-34.newtel.in-addr.itconsult.net. PTR
[resolv][51359.00] request failed, answering with empty SERVFAIL
[resolv][51359.03] finished in state: 8, queries: 2, mempool: 32800 B
```

## Issue #675: Build system: allow building with LeakSanitizer only
Štěpán Balážik · 2021-05-28 · https://gitlab.nic.cz/knot/knot-resolver/-/issues/675

Currently, we can configure meson with `-Db_sanitize=address`, which produces unnecessary slowdowns when one is only interested in detecting leaks. This slowdown is even more pronounced when running a replay in `rr` (which I do a lot lately 😏).

## Issue #672: module dependencies
Tomas Krizek · 2022-02-18 · https://gitlab.nic.cz/knot/knot-resolver/-/issues/672

Once we support declarative configuration, we need to figure out what modules to load, when, and in which order. Some considerations:
- unlike lua config, the declarative one has no order of execution (and loading modules)
- modules may depend on other modules, either as a hard or soft requirement
- modules may want to detect whether their requirements are met
- modules should be loaded automatically depending on the chosen configuration
- there should be no conflict between the desired configuration and the running configuration (i.e. when a module tries to auto-load another module which the user explicitly disabled)

## Issue #658: Fetch NS names and glue from both parent and child zones (in some way)
Štěpán Balážik · 2021-01-04 · https://gitlab.nic.cz/knot/knot-resolver/-/issues/658

After !1097, Knot Resolver is properly parent-centric in the resolution.
I recently fixed `iter_pcnamech.rpl` in deckard!207 to actually test something and it requires a query to the child zone to discover a NS name/address to pass.
Moreover https://tools.ietf.org/html/draft-ietf-dnsop-ns-revalidation-00#section-3 points in the direction of querying the child zone as well.
Blocks deckard!207.

## Issue #654: insufficient caching of some uncommon wildcards
Vladimír Čunát · 2020-12-11 · https://gitlab.nic.cz/knot/knot-resolver/-/issues/654

In an NSEC3-signed zone, if a wildcard is nested deeper than directly under the apex, positive expansions from it may not be cached properly (though they succeed). Testing example: `foo.t.cunat.cz AAAA`.
The issue is that the aggressive cache thinks it needs to additionally provide an NSEC3 record matching the closest (provable) encloser, but that's not true in this case (because the wildcard record proves the encloser's existence). This NSEC3 record must exist, but the resolver probably hasn't obtained it, so synthesis from cache (usually) fails.
Fortunately, the typical wildcard usage I see is directly under the apex (`*.example.com`). We may also be "saved" by queries for non-existing types on the same name (e.g. AAAA), as those need this NSEC3 record, and thus the only downside would be its "unneeded" addition into the corresponding positive wildcard expansions.

## Issue #651: dnstap module spawns a thread
Vladimír Čunát · 2020-12-07 · https://gitlab.nic.cz/knot/knot-resolver/-/issues/651

That's not consistent with the kresd architecture, though I can't think of a particular reason why it might cause a problem. Note that this thread will get spawned for each kresd process, so it might be a bit wasteful.
We might prefer to rewrite the module to use the shared libuv loop (to know when the socket is ready to receive more data), but maybe the [fstrm tools](https://farsightsec.github.io/fstrm/overview.html) don't provide good support for that. If we drop the thread, this library might no longer be worth depending on (as the framing is trivial).

## Issue #648: server selection: implement a way to do asynchronous NS name resolution
Štěpán Balážik · 2020-11-30 · https://gitlab.nic.cz/knot/knot-resolver/-/issues/648

The following discussion from !1030 should be addressed:
- [ ] @pspacek started a [discussion](https://gitlab.nic.cz/knot/knot-resolver/-/merge_requests/1030#note_184348): (+6 comments)
  > I do not see this flag in use. Is it intentional?

## Issue #647: server selection: collect and use TCP connection information
Štěpán Balážik · 2021-11-08 · https://gitlab.nic.cz/knot/knot-resolver/-/issues/647

The following discussion from !1030 should be addressed:
- [ ] @pspacek started a [discussion](https://gitlab.nic.cz/knot/knot-resolver/-/merge_requests/1030#note_184337): (+3 comments)
> I'm either blind or it is not used anywhere. Can you point me to the place where it gets used, please?
`tcp_waiting` and `tcp_connected`, the respective function, and its calls have been commented out (in 6ef74faf922c5962401747b5aa3a9e01e92e50ff) until we use this information in the server selection process.
This will ultimately be related to #629, for example.

## Issue #638: [discussion] cache backend redesign
Petr Špaček · 2020-12-04 · https://gitlab.nic.cz/knot/knot-resolver/-/issues/638

Let's discuss the problems we have with the current LMDB-based cache backend. We need to analyze whether these are fixable or whether we need to redesign the cache backend.
Problems with LMDB itself:
- Database overfill leads to an irrecoverable state where the DB practically becomes read-only and the only ways forward are either enlarging the database or deleting it. Together with the inability to detect whether committing a transaction will lead to this state, this prevents us from reliably keeping the cache at a constant size, leading to race conditions in overflow handling etc. (#605)
- Transactions have [undefined limits](https://lists.openldap.org/hyperkitty/list/openldap-technical@openldap.org/message/VI7K5NWV46J6DACITXVS7X2SM3HZIXVB/) on them, forcing us to [jump through hoops](https://gitlab.nic.cz/knot/knot-resolver/-/merge_requests/1042/diffs?commit_id=c651fbf24017f26435b86e69e9ce73c7f5976b97).
- LMDB depends on unique PID values - this assumption does not hold when sharing cache across containers (#637).
Other cache-related problems: #602, #604