# Knot Resolver issues
https://gitlab.nic.cz/knot/knot-resolver/-/issues

---
**outdated versions in CI, Dockerfile, ...** (#908) · Vladimír Čunát (vladimir.cunat@nic.cz), 2024-03-27T11:28:52+01:00 · https://gitlab.nic.cz/knot/knot-resolver/-/issues/908

Old versions that I see:
- [ ] Debian 11 in most of CI (still supported but 12 is the default stable now)
- [ ] Debian 11 in Dockerfile
- [ ] KNOT_VERSION 3.1 in most of CI (and 3.2 in a few places), while 3.3 is the current stable
We might also:
- [ ] increase the lower bound on knot-dns version
* `>= 3.0.2` right now, but 3.1 is from 2021
  * 3.0.x is the default in Debian 11 (current oldstable, still a supported distro)

Milestone: 6.1.0

---
**CI tests for cross-compilation** (#907) · Oto Šťáva, 2024-03-22T13:36:52+01:00 · https://gitlab.nic.cz/knot/knot-resolver/-/issues/907

Discussion from !1503:
- [ ] @ostava started a [discussion](https://gitlab.nic.cz/knot/knot-resolver/-/merge_requests/1503#note_295121): (+1 comment)
> Idea: what if we had a cross-compilation test in the CI? And I mean not necessarily for Turris, but let's say something like:
>
> * **Job 1:** cross-compile for ARM on x86
> * **Job 2:** run tests on the cross-compiled executable from **Job 1** on ARM
>
> Since we do have arm64 runners available, it *could* be easy enough to do?
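The two-job idea above could be sketched in GitLab CI terms roughly like this (job names, images, runner tags, and the cross file are illustrative assumptions, not an actual proposal):

```yaml
# Hypothetical sketch of the two jobs described above.
cross-compile-arm64:
  image: debian:12
  tags: [amd64]
  script:
    - apt-get update && apt-get install -y crossbuild-essential-arm64 meson ninja-build
    - meson setup build-arm64 --cross-file ci/cross-arm64.txt  # cross file assumed to exist
    - ninja -C build-arm64
  artifacts:
    paths: [build-arm64/]

test-arm64:
  tags: [arm64]
  needs: [cross-compile-arm64]
  script:
    - meson test -C build-arm64
```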
Not urgent, just an idea for improvement.

---
**docs: documentation for version 6** (#796) · Aleš Mrázek, 2024-03-19T12:23:57+01:00 · https://gitlab.nic.cz/knot/knot-resolver/-/issues/796

The goal is to have almost-finished documentation for version 6.
The current documentation can be seen via [GitLab Pages](https://www.knot-resolver.cz/documentation/latest) (generated on demand from branches chosen by us).
# Step 1: Writing the documentation
The structure of documentation is based on #776.
Some related comments can be found in !1377.
- [x] **Getting Started** section: installation, startup, initial configuration (examples)
- [ ] **Configuration** section: rewrite [pages about Lua configuration](https://knot.pages.nic.cz/knot-resolver/config-lua.html) with declarative configuration
- [ ] syntax and conventions (this might be already rewritten somewhere)
- [ ] modules
- [ ] networking
- [ ] performance and resiliency
- [ ] policy, access control and data manipulation
- [ ] logging, monitoring, diagnostics
- [ ] DNSSEC, data verification
- [ ] experimental features
- [ ] **Management** section
- [ ] HTTP API
- [x] kresctl utility
- [ ] **For operators** section
- [ ] upgrading to version 6
- [ ] **For developers** section
- [ ] internal architecture
- [x] **Deployment** guides
- [x] manual
- [x] systemd
- [x] docker
- [x] multiple instances
- [ ] extending the resolver
- [ ] create gitlab issues for all documentation sections that won't be fully completed with this MR
# Step 2: Collect and implement feedback
1. [ ] run spell checker
2. [ ] collect feedback from @vcunat
3. [ ] implement feedback
4. [ ] collect feedback from @llhotka
5. [ ] implement feedback
6. [ ] collect feedback from someone unrelated to the dev team (ODVR admins, someone random, ...)
7. [ ] implement feedback
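Step 1 of the list above (running a spell checker) typically also covers simple lints such as doubled words. A stdlib-only sketch of that particular check (the sample sentence is made up):

```python
import re

def doubled_words(text: str) -> list[str]:
    """Return words that appear twice in a row, a common typo in docs."""
    return [m.group(1) for m in re.finditer(r"\b(\w+)\s+\1\b", text, re.IGNORECASE)]

sample = "Run the the spell checker before collecting feedback."
print(doubled_words(sample))  # -> ['the']
```

Dedicated tools such as codespell do far more, but this illustrates the kind of mechanical pass intended here.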
Related: !1377

Closes #776.

Milestone: 6.1.0 · Assignee: Aleš Mrázek

---
**DNS64 synthesis fails for tudelft.account.worldcat.org** (#797) · Ondřej Caletka, 2024-03-11T22:27:53+01:00 · https://gitlab.nic.cz/knot/knot-resolver/-/issues/797

In kresd version 5.6.0 with the DNS64 module enabled, DNS64 does not kick in when resolving `tudelft.account.worldcat.org`:
```
$ dig tudelft.account.worldcat.org a
; <<>> DiG 9.16.37 <<>> tudelft.account.worldcat.org a
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 52064
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;tudelft.account.worldcat.org. IN A
;; ANSWER SECTION:
tudelft.account.worldcat.org. 2459 IN CNAME emea.account.worldcat.org.
emea.account.worldcat.org. 28 IN A 193.240.184.98
$ dig tudelft.account.worldcat.org aaaa
; <<>> DiG 9.16.37 <<>> tudelft.account.worldcat.org aaaa
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 63626
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
; EDE: 4 (Forged Answer): (BHD4: DNS64 synthesis)
;; QUESTION SECTION:
;tudelft.account.worldcat.org. IN AAAA
;; AUTHORITY SECTION:
worldcat.org. 653 IN SOA michelle.ns.cloudflare.com. dns.cloudflare.com. 2312413286 10000 2400 604800 1800
```
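For reference, the AAAA record that DNS64 should synthesize from the A record above can be computed mechanically: the IPv4 address is embedded into the well-known prefix `64:ff9b::/96` (RFC 6052). A stdlib-only sketch:

```python
import ipaddress

def dns64_synthesize(ipv4: str, prefix: str = "64:ff9b::/96") -> str:
    """Embed an IPv4 address into a DNS64 /96 prefix (RFC 6052 layout)."""
    net = ipaddress.IPv6Network(prefix)
    v4 = ipaddress.IPv4Address(ipv4)
    return str(ipaddress.IPv6Address(int(net.network_address) | int(v4)))

# The A record for emea.account.worldcat.org from the first dig output:
print(dns64_synthesize("193.240.184.98"))  # -> 64:ff9b::c1f0:b862
```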
The zone in question is hosted by Cloudflare and has DNSSEC enabled, so my wild guess is that it has something to do with the way Cloudflare signs negative answers.

---
**local-data: allow even with +nord** (#906) · Vladimír Čunát, 2024-03-04T10:24:29+01:00 · https://gitlab.nic.cz/knot/knot-resolver/-/issues/906

While it makes sense to disallow *cached* records in +nord mode by default (for privacy reasons), those arguments do not hold for other kinds of local data, and there might be some use cases, e.g. [resolver.arpa. RESINFO](https://www.ietf.org/archive/id/draft-ietf-add-resolver-info-11.html#section-3).

---
**tmpfiles config is only installed with systemd_files enabled; should be independent** (#886) · Anton, 2024-03-02T16:57:29+01:00 · https://gitlab.nic.cz/knot/knot-resolver/-/issues/886

I appreciate that libsystemd use and installing systemd_files are decoupled in this project.
However, the `systemd/tmpfiles.d/knot-resolver.conf.in` tmpfiles config only gets processed / installed if systemd_files is enabled when buildi...I appreciate that libsystemd use and installing systemd_files are decoupled in this project.
However, the `systemd/tmpfiles.d/knot-resolver.conf.in` tmpfiles config only gets processed / installed if systemd_files is enabled when building.
There are distros that have a working tmpfiles provider but do not use systemd as their init system.
Enabling systemd_files is not the correct solution, as that also installs the other systemd files, which is undesirable on systems that do not actually run systemd as their init system.
Therefore, coupling tmpfiles support to systemd_files is incorrect.
In Gentoo, the `systemd-tmpfiles` binary is provided by the `sys-apps/systemd-utils` package.
```console
$ equery b $(which systemd-tmpfiles)
* Searching for /bin/systemd-tmpfiles ...
sys-apps/systemd-utils-254.8 (/bin/systemd-tmpfiles)
$ eix sys-apps/systemd-utils
[I] sys-apps/systemd-utils
Available versions: 254.5-r2^t 254.7^t (~)254.8^t {+acl boot kernel-install +kmod secureboot selinux split-usr sysusers test +tmpfiles +udev ukify ABI_MIPS="n32 n64 o32" ABI_S390="32 64" ABI_X86="32 64 x32" PYTHON_SINGLE_TARGET="python3_10 python3_11 python3_12"}
Installed versions: 254.8^t(05:00:50 25.12.2023)(kmod split-usr tmpfiles udev -acl -boot -kernel-install -secureboot -selinux -sysusers -test -ukify ABI_MIPS="-n32 -n64 -o32" ABI_S390="-32 -64" ABI_X86="64 -32 -x32" PYTHON_SINGLE_TARGET="python3_11 -python3_10 -python3_12")
Homepage: https://systemd.io/
Description: Utilities split out from systemd for OpenRC users
```
This package is depended upon by a few other crucial bits and it is therefore always installed on a Gentoo OpenRC system.
```console
$ equery d systemd-utils
* These packages depend on systemd-utils:
virtual/libudev-251-r2 (!systemd ? >=sys-apps/systemd-utils-251[udev,abi_x86_32(-)?,abi_x86_64(-)?,abi_x86_x32(-)?,abi_mips_n32(-)?,abi_mips_n64(-)?,abi_mips_o32(-)?,abi_s390_32(-)?,abi_s390_64(-)?])
virtual/tmpfiles-0-r5 (!systemd ? sys-apps/systemd-utils[tmpfiles])
virtual/udev-217-r7 (!systemd ? sys-apps/systemd-utils[udev])
```
I suspect Alpine and [other non-systemd distros](https://ungleich.ch/en-us/cms/blog/2019/05/20/linux-distros-without-systemd/) also run into this.
I do not think this is a distro issue, since the build scripts' underlying assumption that a full systemd installation is the only possible tmpfiles provider does not hold.
In this case we do not need the units or other systemd files, since the init system is OpenRC, but we do support and would like to have the tmpfiles config.
Ideally the two should therefore be decoupled: enabling systemd_files would still enable the tmpfiles config, but tmpfiles could also be enabled independently of systemd_files.
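One possible shape of that decoupling in Meson; this is only a sketch, and the `tmpfiles` feature option, the variable names, and the install path are assumptions, not existing build options:

```meson
# meson_options.txt (hypothetical):
option('tmpfiles', type: 'feature', value: 'auto',
       description: 'install tmpfiles.d configuration')

# meson.build (hypothetical): install if explicitly enabled, or by default
# whenever systemd_files is enabled.
tmpfiles_opt = get_option('tmpfiles')
if tmpfiles_opt.enabled() or (tmpfiles_opt.auto() and systemd_files == 'enabled')
  configure_file(
    input: 'systemd/tmpfiles.d/knot-resolver.conf.in',
    output: 'knot-resolver.conf',
    configuration: conf_data,
    install: true,
    install_dir: '/usr/lib/tmpfiles.d',  # path assumed; a real build would query it
  )
endif
```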
I am happy to submit a patch if there is consensus that this is the correct approach.

---
**Referral is sometimes sent in place of answer to DoH client with DNS64 enabled** (#905) · Ondřej Caletka, 2024-02-29T00:00:00+01:00 · https://gitlab.nic.cz/knot/knot-resolver/-/issues/905

In my setup, it happens from time to time that Knot Resolver provides a wrong answer to a DoH client querying the A record of an IPv4-only name while the DNS64 module is active. It happens only when all of these conditions are met:
- the queried name is a zone apex with an `A` but no `AAAA` record
- the `dns64` module is loaded
- neither the queried RRset nor the zone's NS set is in the cache
- the client uses `doh2` and asks **concurrently** for the `A` and `AAAA` records (the queries may arrive over completely independent HTTP/2 sessions)
If all these conditions are fulfilled, then Knot Resolver sometimes answers the
A query with the referral received from the parent zone of the queried name. I was able
to reproduce the issue with these names:
- `github.com`
- `duckduckgo.com`
- `liberec.cz`
- `ipv4only.arpa`
Steps to reproduce
------------------
I reproduced the issue on Knot Resolver 5.7.1 installed from the EPEL repository on Fedora 39 with this configuration
(the cache size is set to the lowest possible value to increase the probability of hitting the issue):
```lua
modules = {'dns64'}
net.listen('::1', 443, { kind = 'doh2' })
cache.size = 32768
user('knot-resolver','knot-resolver')
```
I use this script to keep repeating queries with the [doh](https://github.com/curl/doh) utility until the `A` records go missing from the response; that happens within ca. 15 minutes:
```bash
#!/bin/bash
domain=${1-github.com}
# Enable debugging
socat - unix-connect:/run/knot-resolver/control/1 <<EOF
policy.add(policy.suffix(policy.DEBUG_ALWAYS, policy.todnames({'$domain'})))
EOF
while true;
do
date
out="$(doh -k $domain https://[::1]/dns-query)";
echo "$out";
grep -q "^A:" <<<"$out" || break;
sleep 1;
done
date
```
I was not able to reproduce the issue using the `kdig` tool, possibly because it sends queries sequentially and my shell was not fast enough to spawn a second instance of `kdig` before the first one finished.
Packet capture of the issue
---------------------------
I am attaching a [packet capture](/uploads/8892559730939bc8cbfc2b61539ec53d/packet_capture.pcap) together with a [TLS key log](/uploads/b14a7fbc6bcdf63b2db8c1725b9517a4/tls_keys.txt), as well as [kresd syslogs](/uploads/2bd8429492f0d08fcffdd7eef5edfe73/syslog.txt) of the issue, demonstrated when querying `ipv4only.arpa`. The issue is very well visible with the Wireshark filter set to `lower(dns.qry.name) == "ipv4only.arpa"`.
Packets 31–188 show the correct behavior; packets 256–422 show the issue, particularly packet 359, which contains the referral from packet 354 instead of the answer from packet 417:
```
No. Protocol Info
31 DoH Standard query 0x0000 A ipv4only.arpa
36 DoH Standard query 0x0000 AAAA ipv4only.arpa
65 DNS Standard query 0x53fb AAAA ipV4oNlY.arpa OPT
66 DNS Standard query 0x9e3d A iPv4onLY.ARPA OPT
67 DNS Standard query response 0x53fb AAAA ipV4oNlY.arpa NS a.iana-servers.net NS b.iana-servers.net NS c.iana-servers.net NS ns.icann.org NSEC iris.arpa RRSIG OPT
69 DNS Standard query response 0x9e3d A iPv4onLY.ARPA NS a.iana-servers.net NS b.iana-servers.net NS c.iana-servers.net NS ns.icann.org NSEC iris.arpa RRSIG OPT
108 DNS Standard query 0xb804 AAAA iPV4oNLY.aRpa OPT
124 DNS Standard query response 0xb804 AAAA iPV4oNLY.aRpa SOA sns.dns.icann.org OPT
142 DNS Standard query 0x4de9 A Ipv4onlY.aRPa OPT
144 DNS Standard query response 0x4de9 A Ipv4onlY.aRPa NS a.iana-servers.net NS b.iana-servers.net NS c.iana-servers.net NS ns.icann.org NSEC iris.arpa RRSIG OPT
174 DNS Standard query 0xc998 A IpV4oNly.ARPa OPT
179 DNS Standard query response 0xc998 A IpV4oNly.ARPa A 192.0.0.170 A 192.0.0.171 NS a.iana-servers.net NS b.iana-servers.net NS c.iana-servers.net NS ns.icann.org OPT
184 DoH Standard query response 0x0000 AAAA ipv4only.arpa AAAA 64:ff9b::c000:aa AAAA 64:ff9b::c000:ab SOA sns.dns.icann.org
188 DoH Standard query response 0x0000 A ipv4only.arpa A 192.0.0.170 A 192.0.0.171
256 DoH Standard query 0x0000 A ipv4only.arpa
261 DoH Standard query 0x0000 AAAA ipv4only.arpa
287 DNS Standard query 0x23b6 AAAA ipV4oNlY.arPa OPT
288 DNS Standard query 0x8503 A IpV4ONLy.ARpA OPT
292 DNS Standard query response 0x23b6 AAAA ipV4oNlY.arPa NS a.iana-servers.net NS b.iana-servers.net NS c.iana-servers.net NS ns.icann.org NSEC iris.arpa RRSIG OPT
293 DNS Standard query response 0x8503 A IpV4ONLy.ARpA NS b.iana-servers.net NS ns.icann.org NS a.iana-servers.net NS c.iana-servers.net NSEC iris.arpa RRSIG OPT
328 DNS Standard query 0x4ab4 AAAA iPV4ONLy.arpa OPT
330 DNS Standard query response 0x4ab4 AAAA iPV4ONLy.arpa SOA sns.dns.icann.org OPT
350 DNS Standard query 0x17fa A ipv4ONLY.ARpa OPT
354 DNS Standard query response 0x17fa A ipv4ONLY.ARpa NS a.iana-servers.net NS b.iana-servers.net NS c.iana-servers.net NS ns.icann.org NSEC iris.arpa RRSIG OPT
359 DoH Standard query response 0x0000 A ipv4only.arpa NS ns.icann.org NS a.iana-servers.net NS b.iana-servers.net NS c.iana-servers.net
407 DNS Standard query 0x0f40 A IPv4oNly.arpA OPT
417 DNS Standard query response 0x0f40 A IPv4oNly.arpA A 192.0.0.170 A 192.0.0.171 NS a.iana-servers.net NS b.iana-servers.net NS c.iana-servers.net NS ns.icann.org OPT
422 DoH Standard query response 0x0000 AAAA ipv4only.arpa AAAA 64:ff9b::c000:aa AAAA 64:ff9b::c000:ab SOA sns.dns.icann.org
```

---
**declarative policy module and other user-supplied DNS data** (#535) · Petr Špaček, 2024-02-28T12:15:39+01:00 · https://gitlab.nic.cz/knot/knot-resolver/-/issues/535

Current problem
---------------
Our current imperative policy module uses a chain of Lua functions; this is quite slow and hard to use for non-programmers.
Proposal
--------
Design a new method to configure "policies", preferably in a declarative way. By "policies" I mean a generic way to influence resolving and inject user-supplied data into DNS tree or block other stuff.
A declarative way should be more intuitive to use than writing Lua functions, and also faster if we design it right.
Here is an incomplete list of things we might want to express.
- [x] ability to also block sub-queries, e.g. when following CNAMEs (#217)
- [ ] ability to block RR data - e.g. rebinding protection, blacklist of NS names etc. (#523)
- [x] ACLs (including negative ACLs, #370)
- [x] merge views with other policies (see also #445)?
- [x] redirecting specific zones to user-configured servers (#428, !651)
  - [ ] beware that we also need the port number, not just the IP address
- [x] theoretical "helper" NS+glue records from kresd config should not be retrievable from outside
- FORWARDing
- TLS forwarding has many knobs and might need even more: #481
- do we still need STUB policy? If so see #218
- FORWARDing might need exceptions for some subtrees (see e.g. https://lists.nlnetlabs.nl/pipermail/unbound-users/2019-December/006560.html)
- generally special EDNS tricks: #314, #303; also improve #657
- special cache semantics (do not cache this sub-tree, limit TTL in this sub-tree)
- maybe DNS64 module should be merged with policies and ACLs: #368
- [x] maybe hints module should be merged in as well (see also #205, #349)
- [x] maybe also a way to provide other user-supplied data - #540
* (well, more ways can always be added)
- maybe prefill module should be merged as well (see also #417)
- think of interaction with daf module (beware of #183)
* `@vcunat` would prefer to deprecate DAF,
but theoretically we could think of translating DAF rules into the new policy rules
- design should be able to support full strength of RPZ (example of a problem: #194)
* the most common features are in 6.0.x – CNAME redirection in particular, and interacting well with other rules (multiple rules of different kinds can trigger when jumping through CNAME chains)
- design needs to support an efficient mechanism which mimics RPZ with zone transfer, including IXFR(!) (#195)
- build mechanism for better visibility into policies (#364)
- it needs to work with huge lists (apparently users want to have long block lists, see https://lists.nlnetlabs.nl/pipermail/unbound-users/2019-December/006559.html)
* improved in 6.0.x: shared inside LMDB across all processes, but efficiency of restarts/reloads/updates could be significantly improved (as of 6.0.6)
- [x] open question: at which stage should the module kick in? Can it be e.g. used to implement `ignore-cd-flag` policy as seen in Unbound?
* the `view:` part can be used to set such options, though there's no ignore-cd in particular so far
- per-domain setting for rate-limits e.g. like `ratelimit-below-domain`, `ratelimit-for-domain` etc. like in Unbound
* [ ] first per-user changes in rate-limits in `views:` (when we have any rate-limiting)
- [x] special handling for reserved and local-only names: see #205 and think it through

Milestone: 2020 Q2

---
**support RPZ CNAME redirection** (#194) · Petr Špaček, 2024-02-28T12:14:34+01:00 · https://gitlab.nic.cz/knot/knot-resolver/-/issues/194

Redirection using CNAMEs from RPZ is useful for redirecting blocked domains to a user-controlled domain. The user-controlled domain can e.g. have an SPF record which blocks e-mails at the SMTP level, which helps a lot with phishing and spam domains.

---
**support prefilling for arbitrary zone** (#417) · Petr Špaček, 2024-02-28T12:12:23+01:00 · https://gitlab.nic.cz/knot/knot-resolver/-/issues/417

Ulrich from IIS requested a feature which would allow them to prefill the resolver's cache with an arbitrary zone, i.e. not only the root zone.
Technical note:
Simple removal of the checks for the zone name does not work, because `DS` records are missing in the cache and this leads to failing validation. Maybe we can just wrap the import in a function which requests `DS` and calls the import from a query callback?

---
**dns64 is broken with policy.STUB** (#218) · Vladimír Čunát, 2024-02-28T12:09:18+01:00 · https://gitlab.nic.cz/knot/knot-resolver/-/issues/218

See e.g. 0b748e0e49. Related: https://gitlab.nic.cz/knot/knot-resolver/issues/217

---
**Overwrite Nameserver (STUB?)** (#428) · Denis, 2024-02-28T12:08:27+01:00 · https://gitlab.nic.cz/knot/knot-resolver/-/issues/428

Hi,
(IPs, hostnames and domains are fictional, but realistic.)
Description
===========
My goal is to set up a local nameserver (Knot, on the machine known as vanadium, IP 192.168.5.3) and have a local kresd (on machine palstek, 192.168.5.2) use this nameserver for all local domains (example.org). So, for a request for www.example.org, kresd should use vanadium as the nameserver instead of the public nameservers (a.iana-servers.net..., 199.43.135.53...).
The difference to #349 is: a nameserver instead of hints.
My first try was to use policy.STUB: `policy.add( policy.suffix( policy.STUB( "192.168.5.2"), {todname('example.org.')}))`
But vanadium is a non-recursive server, while STUB expects a recursive one. So `troja CNAME www.heise.de.` on vanadium remains unresolved; kresd will not follow this CNAME. A local CNAME `lieschen CNAME mueller` works fine.
We have problems with some clients which expect an A record, not a CNAME. I would expect a recursive DNS server to follow the CNAME, and it does so as long as the name is not in the example.org. zone.
Configs for STUB-example
========================
Zone
----
```zone
$TTL 60
@ SOA vanadium.example.org root.example.org ( 2018121103 28800 14400 3600000 86400 )
NS vanadium
A 192.168.5.12
troja CNAME www.heise.de.
mueller A 192.168.5.24
lieschen CNAME mueller
```
kresd.conf
----------
```lua
user( 'knot-resolver','knot-resolver')
cache.size = 1*GB
modules = { 'policy', 'stats', 'predict' }
verbose(true)
predict.config(20, 72)
policy.add( policy.all( policy.QTRACE))
policy.add( policy.suffix( policy.STUB( "10.91.53.3"), {
todname('example.org.')
}))
```
Tests for STUB-example
======================
Simple A-Record, no problems:
```sh
# dig mueller.example.org
;; ANSWER SECTION:
mueller.example.org. 60 IN A 192.168.5.24
```
A CNAME which the non-recursive server itself already follows:
```sh
# dig lieschen.example.org
;; ANSWER SECTION:
lieschen.example.org. 60 IN CNAME mueller.example.org.
mueller.example.org. 60 IN A 192.168.5.24
```
The unfollowed CNAME-record:
```sh
# dig troja.example.org
troja.example.org. 60 IN CNAME www.heise.de.
```
I would have expected the answer to also contain the A record `www.heise.de. 86400 IN A 193.99.144.85`.
So STUB does not seem to be the right solution for my goal; the documented purpose of STUB also differs from what I want.
* But how can I override the nameserver for my domain in kresd.conf?
* Or how is it possible to use STUB while having kresd follow CNAMEs when no A record is provided?
BR,
Denis

---
**support negative ACLs** (#370) · Petr Špaček, 2024-02-28T12:06:31+01:00 · https://gitlab.nic.cz/knot/knot-resolver/-/issues/370
An operator from CSNOG 1 asked for the ability to use a negative ACL, i.e. something like
```
view:notaddr('10.0.0.1', policy.suffix(policy.TC, {'\7example\3com'}))
```
to apply policy to all clients **not** having IP address `10.0.0.1`.
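For completeness, something close to this can sometimes be approximated with the current primitives by relying on rule ordering (a sketch that assumes first-match semantics in the view module; it is not a real substitute for a `notaddr`):

```lua
-- Sketch: an earlier, more specific view shadows the later catch-all,
-- so only clients other than 10.0.0.1 reach the TC policy.
view:addr('10.0.0.1/32', policy.all(policy.PASS))
view:addr('0.0.0.0/0', policy.suffix(policy.TC, {todname('example.com')}))
```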
The question here is how it should be configured, and whether we should extract the
ACL logic to some other place. Related: #368

---
**policies and sub-queries** (#217) · Vladimír Čunát, 2024-02-28T12:05:31+01:00 · https://gitlab.nic.cz/knot/knot-resolver/-/issues/217

Policies are currently only applied to full requests, i.e. when `begin` happens for layers. We do copy the flags and server list when creating sub-queries, but:
- not everywhere, e.g. the dns64 module is broken in this respect;
- the sub-query might be for a name to which the policy should apply differently, e.g. when users handle different parts of the DNS tree differently. A similar situation arises on CNAME jumps, as those may also lead into a different part of the tree.

---
**implement reserved domains properly** (#205) · Vladimír Čunát, 2024-02-28T12:04:06+01:00 · https://gitlab.nic.cz/knot/knot-resolver/-/issues/205

https://tools.ietf.org/html/rfc6761#section-6
At least the `**.localhost` entries might be reasonable to override by the hints module (`/etc/hosts`), so that might be the proper place for implementation.
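For illustration, overriding such a name through the hints module is already expressible in the Lua configuration (a minimal sketch):

```lua
-- Minimal sketch: answer 'localhost' locally via the hints module.
modules.load('hints')
hints['localhost'] = '127.0.0.1'  -- sugar for hints.set(); '::1' can be added similarly
```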
- [x] We should also consider automatically loading some modules by default (e.g. policy), because they implement functionality that is **mandatory** for resolvers. *Loaded by default since 2.0.0.*

---
**EDNS-Client-Subnet (ECS) Support in Knot Resolver** (#362) · Iman, 2024-02-26T19:32:26+01:00 · https://gitlab.nic.cz/knot/knot-resolver/-/issues/362

Hi All,
Is EDNS-Client-Subnet (ECS) already supported in Knot Resolver? If yes, how configurable is this feature?
For example, could we configure for which queries to authoritative servers Knot Resolver should append the ECS option?
Thanks in advance.

---
**Can't run knot-resolver in docker on macOS** (#904) · Max Makarov, 2024-02-26T10:14:03+01:00 · https://gitlab.nic.cz/knot/knot-resolver/-/issues/904

```bash
docker run -it --rm cznic/knot-resolver:6
```
```
Unable to find image 'cznic/knot-resolver:6' locally
6: Pulling from cznic/knot-resolver
5d0aeceef7ee: Download complete
cb5ec940ca82: Download complete
754e8ab812f6: Download complete
835d0e91cb7d: Download complete
160af10ec0b1: Download complete
Digest: sha256:85f59f84ff786ed2d93a7efe94e1b21b5dc5f0b5c76d5dfd029e5b03bd69d803
Status: Downloaded newer image for cznic/knot-resolver:6
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
2024-02-25 12:54:47,649 manager[1]: [DEBUG] asyncio: Using selector: EpollSelector
2024-02-25 12:54:47,651 manager[1]: [INFO] knot_resolver_manager.server: Loading configuration from '/config/config.yaml' file.
2024-02-25 12:54:47,655 manager[1]: [DEBUG] knot_resolver_manager.server: Changing working directory to '/var/run/knot-resolver'.
2024-02-25 12:54:47,658 manager[1]: [WARNING] knot_resolver_manager.log: Changing logging level to 'INFO'
2024-02-25 12:54:47,658 manager[1]: [INFO] knot_resolver_manager.kresd_controller: Starting service manager auto-selection...
2024-02-25 12:54:47,658 manager[1]: [INFO] knot_resolver_manager.kresd_controller: Available subprocess controllers are ('supervisord',)
2024-02-25 12:54:47,658 manager[1]: [INFO] knot_resolver_manager.kresd_controller: Selected controller 'supervisord'
2024-02-25 12:54:47,658 manager[1]: [INFO] knot_resolver_manager.kresd_controller.supervisord: We want supervisord to restart us when needed, we will therefore exec() it and let it start us again.
[supervisord]
pidfile = supervisord.pid
directory = /run/knot-resolver
nodaemon = true
logfile = /dev/null
logfile_maxbytes = 0
silent = true
loglevel = info
[unix_http_server]
file = supervisord.sock
[supervisorctl]
serverurl = unix://supervisord.sock
[rpcinterface:patch_logger]
supervisor.rpcinterface_factory = knot_resolver_manager.kresd_controller.supervisord.plugin.patch_logger:inject
target = stdout
[rpcinterface:manager_integration]
supervisor.rpcinterface_factory = knot_resolver_manager.kresd_controller.supervisord.plugin.manager_integration:inject
[rpcinterface:sd_notify]
supervisor.rpcinterface_factory = knot_resolver_manager.kresd_controller.supervisord.plugin.sd_notify:inject
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[rpcinterface:fast]
supervisor.rpcinterface_factory = knot_resolver_manager.kresd_controller.supervisord.plugin.fast_rpcinterface:make_main_rpcinterface
[program:manager]
redirect_stderr=false
directory=/
command="/usr/bin/python3" "/usr/bin/python3" "/usr/bin/knot-resolver" "-c" "/config/config.yaml"
stopsignal=SIGINT
killasgroup=true
autorestart=true
autostart=true
startsecs=5
environment=X-SUPERVISORD-TYPE=notify,KRES_SUPRESS_LOG_PREFIX=true
stdout_logfile=NONE
stderr_logfile=NONE
[program:kresd]
process_name=%(program_name)s%(process_num)d
numprocs=161
directory=/run/knot-resolver
command=/usr/sbin/kresd -c kresd%(process_num)d.conf -n
autostart=false
autorestart=true
stopsignal=TERM
killasgroup=true
startsecs=10
environment=SYSTEMD_INSTANCE="%(process_num)d",X-SUPERVISORD-TYPE=notify
stdout_logfile=NONE
stderr_logfile=NONE
[program:cache-gc]
redirect_stderr=false
directory=/run/knot-resolver
command=/usr/sbin/kres-cache-gc -c /var/cache/knot-resolver -d 1000 -u 80 -f 10 -l 100 -L 200 -t 0 -m 0 -w 0
autostart=false
autorestart=true
stopsignal=TERM
killasgroup=true
startsecs=0
environment=
stdout_logfile=NONE
stderr_logfile=NONE
2024-02-25 12:54:47,662 manager[1]: [INFO] knot_resolver_manager.server: Exec requested with arguments: ['/usr/bin/supervisord', 'supervisord', '--configuration', '/run/knot-resolver/supervisord.conf']
2024-02-25 12:54:47,887 supervisor[1]: [INFO] RPC interface 'patch_logger' initialized
2024-02-25 12:54:47,887 supervisor[1]: [INFO] RPC interface 'manager_integration' initialized
2024-02-25 12:54:47,887 supervisor[1]: [INFO] RPC interface 'sd_notify' initialized
2024-02-25 12:54:47,887 supervisor[1]: [INFO] RPC interface 'supervisor' initialized
2024-02-25 12:54:47,887 supervisor[1]: [INFO] RPC interface 'fast' initialized
2024-02-25 12:54:47,887 supervisor[1]: [CRIT] Server 'unix_http_server' running without any HTTP authentication checking
2024-02-25 12:54:47,887 supervisor[1]: [INFO] supervisord started with pid 1
2024-02-25 12:54:47,887 supervisor[1]: [INFO] notify: injected $NOTIFY_SOCKET into event loop
2024-02-25 12:54:48,898 supervisor[1]: [INFO] spawned: 'manager' with pid 18
2024-02-25 12:54:48,993 manager[18] (stderr): SyntaxError: Non-UTF-8 code starting with '\x80' in file /usr/bin/python3 on line 2, but no encoding declared; see http://python.org/dev/peps/pep-0263/ for details
2024-02-25 12:54:48,997 supervisor[1]: [INFO] exited: manager (exit status 1; not expected)
2024-02-25 12:54:50,007 supervisor[1]: [INFO] spawned: 'manager' with pid 24
2024-02-25 12:54:50,103 manager[24] (stderr): SyntaxError: Non-UTF-8 code starting with '\x80' in file /usr/bin/python3 on line 2, but no encoding declared; see http://python.org/dev/peps/pep-0263/ for details
2024-02-25 12:54:50,107 supervisor[1]: [INFO] exited: manager (exit status 1; not expected)
2024-02-25 12:54:52,123 supervisor[1]: [INFO] spawned: 'manager' with pid 30
2024-02-25 12:54:52,217 manager[30] (stderr): SyntaxError: Non-UTF-8 code starting with '\x80' in file /usr/bin/python3 on line 2, but no encoding declared; see http://python.org/dev/peps/pep-0263/ for details
2024-02-25 12:54:52,220 supervisor[1]: [INFO] exited: manager (exit status 1; not expected)
2024-02-25 12:54:55,234 supervisor[1]: [INFO] spawned: 'manager' with pid 36
2024-02-25 12:54:55,299 manager[36] (stderr): SyntaxError: Non-UTF-8 code starting with '\x80' in file /usr/bin/python3 on line 2, but no encoding declared; see http://python.org/dev/peps/pep-0263/ for details
2024-02-25 12:54:55,303 supervisor[1]: [INFO] exited: manager (exit status 1; not expected)
2024-02-25 12:54:56,313 supervisor[1]: [CRIT] manager process entered FATAL state! Shutting down
2024-02-25 12:54:56,314 supervisor[1]: [INFO] gave up: manager entered FATAL state, too many start retries too quickly
```

https://gitlab.nic.cz/knot/knot-resolver/-/issues/902
Specifying port for authoritative DNS causes Knot resolver to fail to start
2024-02-23T10:02:09+01:00
Pavel Švec

Just installed Knot Resolver 6.x, trying to configure it with an internal DNS server (PowerDNS) running on the same server on port 5353.
**This configuration works just fine** (it does not resolve the internal addresses, but that's beside the point - Knot Resolver starts):
```
rundir: /run/knot-resolver
workers: 2
cache:
  storage: /var/cache/knot-resolver
logging:
  level: info
network:
  listen:
    - interface: 1.2.3.4@53
management:
  unix-socket: /run/knot-resolver/manager.sock
forward:
  - subtree: .
    servers:
      - address: [ 8.8.8.8, 1.1.1.1 ]
  - subtree:
      - internaldomain.com
      - veryinternaldomain.eu
      - in-addr.arpa
    servers: [ 127.0.0.1 ]
    options:
      authoritative: true
      dnssec: false
```
**This configuration fails to start:**
```
rundir: /run/knot-resolver
workers: 2
cache:
  storage: /var/cache/knot-resolver
logging:
  level: info
network:
  listen:
    - interface: 1.2.3.4@53
management:
  unix-socket: /run/knot-resolver/manager.sock
forward:
  - subtree: .
    servers:
      - address: [ 8.8.8.8, 1.1.1.1 ]
  - subtree:
      - internaldomain.com
      - veryinternaldomain.eu
      - in-addr.arpa
    servers: [ 127.0.0.1@5353 ]
    options:
      authoritative: true
      dnssec: false
```
**Error message**
```
2024-02-22 18:10:39,660 manager[1073498]: [ERROR] knot_resolver_manager.kres_manager: Kresd with the new config failed to start, rejecting config
2024-02-22 18:10:39,660 manager[1073498]: [ERROR] knot_resolver_manager.server: Initial config verification failed with error: canary kresd process failed to start. Config might be invalid.
```
Yet running `kresctl validate config.yaml` yields no error messages and returns exit code 0:
```
[root@cradns02 knot-resolver]# kresctl validate config.yaml
[root@cradns02 knot-resolver]# echo $?
0
```
Converting both configurations with `kresctl convert config.yaml` and diffing the results yields differences only in the lines below, which look perfectly compatible to me:
```
[root@cradns02 knot-resolver]# diff broken.lua ok.lua
159,161c159,161
< policy.rule_forward_add('internaldomain.com',{dnssec=false,auth=true},{{'127.0.0.1@5353'},})
< policy.rule_forward_add('veryinternaldomain.eu',{dnssec=false,auth=true},{{'127.0.0.1@5353'},})
< policy.rule_forward_add('in-addr.arpa',{dnssec=false,auth=true},{{'127.0.0.1@5353'},})
---
> policy.rule_forward_add('internaldomain.com',{dnssec=false,auth=true},{{'127.0.0.1'},})
> policy.rule_forward_add('veryinternaldomain.eu',{dnssec=false,auth=true},{{'127.0.0.1'},})
> policy.rule_forward_add('in-addr.arpa',{dnssec=false,auth=true},{{'127.0.0.1'},})
```
Based on https://knot.pages.nic.cz/knot-resolver/config-forward.html, the forwarder should support custom port numbers, shouldn't it?
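One variant that could be worth testing (an untested sketch on my side; I don't know whether the parser treats it differently) is the expanded `servers` form with an explicit `address` key, the same shape the `.` subtree already uses above:

```
forward:
  - subtree: in-addr.arpa
    servers:
      # explicit address list instead of the shorthand servers list;
      # whether the @port suffix is accepted here is the open question
      - address: [ '127.0.0.1@5353' ]
    options:
      authoritative: true
      dnssec: false
```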
Anyway, if anyone else struggles with the same problem: I was able to work around it using `iptables -t nat -A OUTPUT -o lo -s 127.0.0.1 -p tcp --dport 53 -j REDIRECT --to-ports 5353`. The idea is to use NAT to rewrite the destination port to 5353 whenever a DNS request comes from localhost (= Knot Resolver in our case). Not an ideal solution, but it seems to work for now.

https://gitlab.nic.cz/knot/knot-resolver/-/issues/901
Cross-domain CNAME records are not being resolved to IP addresses
2024-02-22T16:39:43+01:00
Pavel Švec

In pursuit of DNS management automation (DNS management via web UI / HTTP API), we've chosen Knot Resolver. But it seems to lack (or we could not find in the docs) a feature which would allow us to create CNAME records pointing from internal to external zones. We're currently using BIND, where the following works fine:
```service1.internal.eu. IN CNAME publicservice.external.com.```
**What I'd expect is**: Knot Resolver asks our internal authoritative DNS (PowerDNS) for `service1.internal.eu.`, which returns CNAME `publicservice.external.com.`; if the CNAME target is not matched by other policies, the resolver then asks a public DNS server (like 8.8.8.8, 1.1.1.1, ...) to resolve it to an IP address and returns the result to the client.
**What's happening is**: Knot Resolver asks our internal authoritative DNS for `service1.internal.eu.`, which returns CNAME `publicservice.external.com.`, and the resolver, satisfied, forwards that back to the client unresolved.
Other queries to internal domains seem to work fine, including ones defined as:
```
service1.internal.eu. IN CNAME service1a.internal.eu.
service1a.internal.eu. IN A 1.2.3.4
```
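One thing that may be worth testing (a speculative, untested sketch, not a confirmed fix): `policy.STUB` passes the upstream answer through more or less verbatim, so switching the internal domains to `policy.FORWARD` might make the resolver chase the CNAME target itself:

```
internalDomains = policy.todnames({'internal.eu.', 'veryinternal.eu.', 'in-addr.arpa.'})
-- FORWARD resolves the answer further (including CNAME targets),
-- unlike STUB, which hands the upstream response back as-is.
policy.add(policy.suffix(policy.FORWARD({'127.0.0.1@5353'}), internalDomains))
```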
The reason we do it this way is that we want to give a "public" (read: cloud-based) service used internally a meaningful, internally managed name instead of something like `auiewrthuiasdvbjas123juiahgi.cloudfront.net` - or we simply don't know the public IP of the service. It's really a similar case when a service is publicly proxied by CloudFlare or the like, where we'd otherwise have to re-check the `A` record every once in a while to see whether it changed.
Contents of `/etc/knot-resolver/kresd.conf`:
```
-- SPDX-License-Identifier: CC0-1.0
-- vim:syntax=lua:set ts=4 sw=4:
-- Refer to manual: https://knot-resolver.readthedocs.org/en/stable/
-- Network interface configuration
net.listen('1.2.3.4', 53, { kind = 'dns' })
-- Logging
log_level('debug')
log_target('stdout')
-- Load useful modules
modules = {
    'hints > iterate', -- Allow loading /etc/hosts or custom root hints
    'stats',           -- Track internal statistics
    'predict',         -- Prefetch expiring/frequent records
    'view',            -- Restrict IP addresses
}
-- Cache size
cache.size = 100 * MB
internalDomains = policy.todnames({'internal.eu.', 'veryinternal.eu.','in-addr.arpa.'})
policy.add(policy.suffix(policy.FLAGS({'NO_CACHE'}), internalDomains))
policy.add(policy.suffix(policy.STUB({'127.0.0.1@5353'}), internalDomains))
policy.add(policy.pattern(policy.FORWARD({'8.8.8.8'}), '.*'))
```

https://gitlab.nic.cz/knot/knot-resolver/-/issues/802
getting timeout when resolving retail.mobile.lbi.santander.uk
2024-02-20T16:23:47+01:00
Petr Jelinek

I've faced this issue on my Turris Omnia and found out that it is caused by knotd. I have tried to run it in Docker (outside of my "turris" network).
As you can see, when I dig this domain, it works fine:
```
$ dig @1.1.1.1 retail.mobile.lbi.santander.uk a
; <<>> DiG 9.18.8 <<>> @1.1.1.1 retail.mobile.lbi.santander.uk a
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 40183
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;retail.mobile.lbi.santander.uk. IN A
;; ANSWER SECTION:
retail.mobile.lbi.santander.uk. 108 IN A 193.127.211.80
;; Query time: 3 msec
;; SERVER: 1.1.1.1#53(1.1.1.1) (UDP)
;; WHEN: Thu Jul 13 10:48:06 BST 2023
;; MSG SIZE rcvd: 75
```
kdig fails:
```
$ docker run --rm cznic/knot kdig @1.1.1.1 retail.mobile.lbi.santander.uk SOA +dnssec
;; WARNING: response timeout for 1.1.1.1@53(UDP)
;; WARNING: response timeout for 1.1.1.1@53(UDP)
;; WARNING: response timeout for 1.1.1.1@53(UDP)
;; ERROR: failed to query server 1.1.1.1@53(UDP)
```
...however it works fine for other domains:
```
$ docker run --rm cznic/knot kdig @1.1.1.1 nic.cz SOA +dnssec
;; ->>HEADER<<- opcode: QUERY; status: NOERROR; id: 26938
;; Flags: qr rd ra ad; QUERY: 1; ANSWER: 2; AUTHORITY: 0; ADDITIONAL: 1
;; EDNS PSEUDOSECTION:
;; Version: 0; flags: do; UDP size: 1232 B; ext-rcode: NOERROR
;; QUESTION SECTION:
;; nic.cz. IN SOA
;; ANSWER SECTION:
nic.cz. 1800 IN SOA a.ns.nic.cz. hostmaster.nic.cz. 1689235477 14400 3600 1209600 7200
nic.cz. 1800 IN RRSIG SOA 13 2 1800 20230727080427 20230713063427 36959 nic.cz. EBzkqEHwKlzsDIfb6Q5pPQ6szq4RFQfr2TfSpMqMzpizy/xSAfn3RsX/4q0lIVUODwY3sqgNyYXOFkDdHIYnNw==
;; Received 189 B
;; Time 2023-07-13 09:48:38 UTC
;; From 1.1.1.1@53(UDP) in 28.2 ms
```