Knot DNS issues
https://gitlab.nic.cz/knot/knot-dns/-/issues

Issue #646: Wrong processing of multiple $INCLUDE directives
https://gitlab.nic.cz/knot/knot-dns/-/issues/646
Reported by Ondřej Caletka, last updated 2019-06-07

Let there be a zone file named `tul.cz.zone`:
```
$TTL 24h
$ORIGIN tul.cz.
@ IN SOA bubo.tul.cz. satrapa.bubo.tul.cz. (
201905290
3600
1800
2419200
86400
)
IN NS bubo.tul.cz.
IN NS tul.cesnet.cz.
IN MX 0 bubo.tul.cz.
IN MX 50 tul.cesnet.cz.
$INCLUDE "adm.tul.cz.zone";
$INCLUDE "is.tul.cz.zone";
web IN A 147.230.16.27
www IN CNAME web.tul.cz.
```
This file includes two zonelets, `is.tul.cz.zone`:
```
$ORIGIN is.tul.cz.
c200ms IN A 147.230.89.208
```
and `adm.tul.cz.zone`:
```
$ORIGIN adm.tul.cz.
G-MaR IN A 147.230.13.18
```
If the zonelets are included in the order shown above, Knot refuses to load the zone:
```
# kzonecheck tul.cz.zone -v
error: [tul.cz.] zone loader, fatal error in zone, file 'tul.cz.zone', line 20 (file open error)
error: [tul.cz.] zone loader, failed to load zone, file 'tul.cz.zone', 1 errors
Failed to run semantic checks (failed)
```
Strace shows there's something fishy with the file name:
```
# strace kzonecheck tul.cz.zone
…
open("/etc/knot/adm.tul.cz.zone", O_RDONLY) = 4
…
open("/etc/knot/is.tul.cz.zonee", O_RDONLY) = -1 ENOENT (No such file or directory)
…
```
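The stray trailing `e` suggests an off-by-one in how the scanner reuses its include-file-name buffer: the previous, one-character-longer name leaves a stale byte behind. A hypothetical Python sketch of such a bug (Knot's scanner is C; the function and buffer logic here are illustrative assumptions, not the actual code) reproduces the observed file names:

```python
# Hypothetical model of the suspected bug: the file-name buffer is reused
# between $INCLUDE directives, but the stored length is never reset, so a
# shorter name inherits trailing bytes from a longer predecessor.
def buggy_include_names(names):
    buf = bytearray(256)
    length = 0
    opened = []
    for name in names:
        data = name.encode()
        buf[:len(data)] = data
        length = max(length, len(data))  # bug: length only ever grows
        opened.append(buf[:length].decode())
    return opened

# Longer name first: the shorter one picks up a stale trailing byte.
print(buggy_include_names(["adm.tul.cz.zone", "is.tul.cz.zone"]))
# → ['adm.tul.cz.zone', 'is.tul.cz.zonee']

# Shorter name first (the reported workaround): both names come out intact.
print(buggy_include_names(["is.tul.cz.zone", "adm.tul.cz.zone"]))
# → ['is.tul.cz.zone', 'adm.tul.cz.zone']
```

If something like this is the cause, it would also explain why swapping the two `$INCLUDE` lines works around the problem: the length only grows, so a longer name following a shorter one overwrites the buffer completely.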
Please note the double `e` in the second filename (which is one character shorter). Changing the order of the includes so that the shorter file name comes first works around the issue.
Milestone: 2.8. Assignee: Daniel Salzman.

Issue #624: ZSK rollover removes the old key too early
https://gitlab.nic.cz/knot/knot-dns/-/issues/624
Reported by Ondřej Caletka, last updated 2018-11-22

On Debian Linux with Knot 2.6.9 (upstream old stable package), one of my zones went bogus today.
The zone uses default TTL of 86400. DNSSEC policy timers are mostly on defaults:
```
- id: ecdsa
algorithm: ecdsap256sha256
zsk-lifetime: 30d
rrsig-lifetime: 30d
rrsig-refresh: 15d
nsec3: on
```
The rollover went like this:
```
Nov 21 15:20:59 daisy knotd[613]: info: [8.1.7.0.1.0.0.2.ip6.arpa.] DNSSEC, ZSK rollover started
Nov 21 15:20:59 daisy knotd[613]: info: [8.1.7.0.1.0.0.2.ip6.arpa.] DNSSEC, key, tag 37015, algorithm ECDSAP256SHA256, KSK, public, active
Nov 21 15:20:59 daisy knotd[613]: info: [8.1.7.0.1.0.0.2.ip6.arpa.] DNSSEC, key, tag 37711, algorithm ECDSAP256SHA256, public
Nov 21 15:20:59 daisy knotd[613]: info: [8.1.7.0.1.0.0.2.ip6.arpa.] DNSSEC, key, tag 39969, algorithm ECDSAP256SHA256, public, active
Nov 21 15:20:59 daisy knotd[613]: info: [8.1.7.0.1.0.0.2.ip6.arpa.] DNSSEC, successfully signed
Nov 21 15:21:00 daisy knotd[613]: info: [8.1.7.0.1.0.0.2.ip6.arpa.] DNSSEC, next signing at 2018-11-21T16:20:59
Nov 21 16:20:59 daisy knotd[613]: info: [8.1.7.0.1.0.0.2.ip6.arpa.] DNSSEC, signing zone
Nov 21 16:20:59 daisy knotd[613]: info: [8.1.7.0.1.0.0.2.ip6.arpa.] DNSSEC, key, tag 37015, algorithm ECDSAP256SHA256, KSK, public, active
Nov 21 16:20:59 daisy knotd[613]: info: [8.1.7.0.1.0.0.2.ip6.arpa.] DNSSEC, key, tag 39969, algorithm ECDSAP256SHA256, public
Nov 21 16:20:59 daisy knotd[613]: info: [8.1.7.0.1.0.0.2.ip6.arpa.] DNSSEC, key, tag 37711, algorithm ECDSAP256SHA256, public, active
Nov 21 16:20:59 daisy knotd[613]: info: [8.1.7.0.1.0.0.2.ip6.arpa.] DNSSEC, successfully signed
Nov 21 16:20:59 daisy knotd[613]: info: [8.1.7.0.1.0.0.2.ip6.arpa.] DNSSEC, next signing at 2018-11-21T17:20:59
Nov 21 17:20:59 daisy knotd[613]: info: [8.1.7.0.1.0.0.2.ip6.arpa.] DNSSEC, signing zone
Nov 21 17:20:59 daisy knotd[613]: info: [8.1.7.0.1.0.0.2.ip6.arpa.] DNSSEC, key, tag 37015, algorithm ECDSAP256SHA256, KSK, public, active
Nov 21 17:20:59 daisy knotd[613]: info: [8.1.7.0.1.0.0.2.ip6.arpa.] DNSSEC, key, tag 37711, algorithm ECDSAP256SHA256, public, active
Nov 21 17:20:59 daisy knotd[613]: info: [8.1.7.0.1.0.0.2.ip6.arpa.] DNSSEC, successfully signed
Nov 21 17:20:59 daisy knotd[613]: info: [8.1.7.0.1.0.0.2.ip6.arpa.] DNSSEC, next signing at 2018-12-06T16:20:59
```
I noticed the zone was bogus around 18:00. The affected resolver still had the old DNSKEY RRset, without the new ZSK with id 37711:
```
# dig 8.1.7.0.1.0.0.2.ip6.arpa dnskey +dnssec +multi @::1
…
;; ANSWER SECTION:
8.1.7.0.1.0.0.2.ip6.arpa. 28986 IN DNSKEY 256 3 13 (
V75YHZ3AzDGlxRHGK5VOhFlAlTmKNnW2r5ST0vqnujxp
Km2y+rLgDllr5CQArxeLvh+5bOud3OvI8Nb9hW35Eg==
) ; ZSK; alg = ECDSAP256SHA256; key id = 39969
8.1.7.0.1.0.0.2.ip6.arpa. 28986 IN DNSKEY 257 3 13 (
l5Q0Yim0B7LJYTveexWS68pKMZT7Ib9lW5IOWZuPMFmN
jFCgAWkAd7jpnkuQHw5joZOnAhF66drwCsBZB6e99A==
) ; KSK; alg = ECDSAP256SHA256; key id = 37015
8.1.7.0.1.0.0.2.ip6.arpa. 28986 IN RRSIG DNSKEY 13 10 86400 (
20181206152059 20181106135059 37015 8.1.7.0.1.0.0.2.ip6.arpa.
AULNjAs+AjDiZd4QOgF2tktZ6+Orglfxw33bHy3GCQP4
NlDJVmAU4yJQPuUbkDuydqE4AKDWddujgwyC6Nr2SA== )
```
The zone data were, however, signed exclusively by the new ZSK:
```
# dig 8.1.7.0.1.0.0.2.ip6.arpa soa +dnssec +multi @::1 +cdflag
…
;; ANSWER SECTION:
8.1.7.0.1.0.0.2.ip6.arpa. 54 IN SOA nsa.cesnet.cz. hostmaster.cesnet.cz. (
2018110704 ; serial
28800 ; refresh (8 hours)
7200 ; retry (2 hours)
1814400 ; expire (3 weeks)
900 ; minimum (15 minutes)
)
8.1.7.0.1.0.0.2.ip6.arpa. 54 IN RRSIG SOA 13 10 86400 (
20181221162059 20181121145059 37711 8.1.7.0.1.0.0.2.ip6.arpa.
i/60i6+BUAI8u+NB5X1DMjls9RZCA5XujOHfqWiQYu+D
/ivKTw2QhN3FO6bhL/VGfqmo03LPGhgrmTzzVD7bWw== )
```
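The timestamps in the logs and the 86400 s DNSKEY TTL explain the breakage. Under the usual pre-publish rule (a standard DNSSEC rollover requirement, not a quote of Knot's internals), a new ZSK must not become the only signing key until at least one DNSKEY TTL after it was published, so every cache can have seen it. A quick check with the logged times (propagation delay ignored for simplicity):

```python
from datetime import datetime, timedelta

dnskey_ttl = timedelta(seconds=86400)  # the zone's default TTL

published = datetime(2018, 11, 21, 15, 20, 59)  # tag 37711 first appears, not yet active
exclusive = datetime(2018, 11, 21, 16, 20, 59)  # tag 39969 goes passive; 37711 signs alone

earliest_safe = published + dnskey_ttl
print(earliest_safe)               # 2018-11-22 15:20:59
print(exclusive < earliest_safe)   # True: the switch happened far too early
bogus_window = earliest_safe - exclusive
print(bogus_window)                # 23:00:00 of potential breakage
```

A resolver that cached the DNSKEY RRset just before 15:20:59 could keep serving it, without tag 37711, for a full day, which matches the bogus zone observed around 18:00.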
Flushing the resolvers' caches works around the issue, as does using a lower TTL for the zone. Our forward zones use a TTL of 1h, and no such problem was noticed with them.

Issue #616: Parent DS checks should probe all configured servers even if some results are negative
https://gitlab.nic.cz/knot/knot-dns/-/issues/616
Reported by Ondřej Caletka, last updated 2018-10-29

Different DNS resolver implementations take different approaches to the TTL of the delegation as well as the TTL of the DS record.
By actively probing several different implementations (BIND, Unbound, Knot Resolver, Google Public DNS, Quad9, Cloudflare), one can make sure the KSK is not rolled too early.
The current logic in Knot, however, stops as soon as the first probe returns a negative result; the remaining servers are not queried. This leads to a situation where the caches of the other servers stay empty, and when the first probe finally succeeds, the others immediately pull fresh DS records from the authoritative servers, leading to an immediately positive probe result.
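A minimal sketch of the two behaviours (function names and data shapes are illustrative, not Knot's API): the short-circuiting check stops at the first negative answer, while a probe-all check queries every configured resolver and succeeds only when all of them already serve the new DS.

```python
def current_check(resolvers, expected_tag, query_ds_tags):
    """Stops at the first negative result; later resolvers are never
    queried, so their caches are never warmed with any DS record."""
    for resolver in resolvers:
        if expected_tag not in query_ds_tags(resolver):
            return False  # short-circuit: remaining resolvers not probed
    return True

def proposed_check(resolvers, expected_tag, query_ds_tags):
    """Probes every resolver unconditionally; positive only if all agree."""
    results = [expected_tag in query_ds_tags(resolver) for resolver in resolvers]
    return all(results)

# Toy caches: the first resolver still serves the old DS (tag 1000),
# the second has already picked up the new one (tag 2000).
caches = {"bind": {1000}, "unbound": {2000}}
probed = []
def query(resolver):
    probed.append(resolver)  # record which resolvers actually got queried
    return caches[resolver]

current_check(["bind", "unbound"], 2000, query)
print(probed)  # → ['bind']  (unbound's cache stays cold)
probed.clear()
proposed_check(["bind", "unbound"], 2000, query)
print(probed)  # → ['bind', 'unbound']
```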
It would be better to keep probing all configured servers, so that every cache stays filled with the old DS record and all probes succeed only once all caches have refreshed their content.
Milestone: 2.7. Assignee: Libor Peltan.

Issue #609: CDS/CDNSKEY records should be signed by KSK
https://gitlab.nic.cz/knot/knot-dns/-/issues/609
Reported by Ondřej Caletka, last updated 2018-09-20

I've come across an interoperability issue between Knot DNS and the `dnssec-cds` utility, part of BIND. This utility insists on `CDS/CDNSKEY` records being signed by the `KSK`; to be precise, `CDS/CDNSKEY` have to be signed by the same key that the current DS record points to. Therefore it fails to validate `CDS/CDNSKEY` records if the zone is set up to use a KSK and a ZSK.
There is an old thread about this topic on the [Knot-DNS users mailing list](https://lists.nic.cz/pipermail/knot-dns-users/2018-March/001344.html) by @stirnimann, but I didn't find any issue about it.

Issue #595: Incoming IXFR with on-slave signing sometimes leads to memory corruption
https://gitlab.nic.cz/knot/knot-dns/-/issues/595
Reported by Ondřej Caletka, last updated 2018-07-27

Let there be Knot 2.6.7-1+0~20180710153240.24+stretch~1.gbpfa6f52 (Debian upstream package) in this configuration of an on-slave signer, pulling zones from BIND9 on loopback port 53535:
```
server:
listen: 0.0.0.0@53
listen: ::@53
log:
- target: syslog
any: info
remote:
- id: master
address: ::1@53535
acl:
- id: acl_slave
address: ::1
action: transfer
- id: acl_master
address: ::1
action: notify
policy:
- id: ecdsa_fast
zsk-lifetime: 1h
propagation-delay: 10s
rrsig-lifetime: 2h
rrsig-refresh: 1h
nsec3: on
template:
- id: default
master: master
dnssec-signing: on
dnssec-policy: ecdsa_fast
acl: acl_slave
acl: acl_master
zone:
- domain: "example.com."
```
Configuration of BIND looks like this:
```
options {
directory "/var/cache/bind";
listen-on { none; };
listen-on-v6 port 53535 { ::1; };
ixfr-from-differences yes;
allow-transfer { localhost; };
also-notify { ::1; };
};
zone "example.com" { type master; file "example.com.zone"; };
```
Zone file `example.com.zone` looks like this:
```
$TTL 60
@ IN SOA ns hostmaster 20 120 60 3600 10
IN NS ns
ns IN A 192.0.2.0
test IN TXT "test"
test2 IN TXT "test2"
test3 IN TXT "test3"
```
Everything works as expected:
```
# knotc zone-status example.com
[example.com.] role: slave | serial: 21 | transaction: none | freeze: no | refresh: +1m22s | update: not scheduled | expiration: +59m22s | journal flush: not scheduled | notify: not scheduled | DNSSEC re-sign: +59m22s | NSEC3 resalt: +29D23h59m22s | parent DS query: not scheduled
```
After a while, or after issuing `knotc zone-sign example.com`, the zone gets resigned and its serial number increases.
But then you delete a record from the example.com zone file, say `test2`, and increase the serial of the unsigned zone to 21. After issuing `rndc reload example.com`, bad things start to happen:
```
named[19402]: received control channel command 'reload example.com'
named[19402]: zone example.com/IN: loaded serial 21
named[19402]: zone example.com/IN: sending notifies (serial 21)
knotd[19444]: info: [example.com.] notify, incoming, ::1@37079: received, serial 21
knotd[19444]: info: [example.com.] refresh, outgoing, ::1@53535: remote serial 21, zone is outdated
named[19402]: client ::1#59582 (example.com): transfer of 'example.com/IN': IXFR started (serial 20 -> 21)
knotd[19444]: info: [example.com.] IXFR, incoming, ::1@53535: starting
named[19402]: client ::1#59582 (example.com): transfer of 'example.com/IN': IXFR ended
knotd[19444]: info: [example.com.] IXFR, incoming, ::1@53535: finished, 0.00 seconds, 1 messages, 222 bytes
knotd[19444]: error: [example.com.] DNSSEC, failed to fix NSEC3 chain (no such record in zone found)
knotd[19444]: info: [example.com.] DNSSEC, next signing at 2018-07-23T11:19:30
knotd[19444]: info: [example.com.] refresh, outgoing, ::1@53535: zone updated, serial 22 -> 1929864568
knotd[19444]: warning: [example.com.] failed to update zone file (not enough space provided)
knotd[19444]: error: [example.com.] zone event 'journal flush' failed (not enough space provided)
```
At this moment the zone memory in the knot process probably gets corrupted. The server responds with `SERVFAIL` to any DNS query, and the zone status returns some random numbers:
```
# knotc zone-status example.com
[example.com.] role: slave | serial: 1929864568 | transaction: none | freeze: no | refresh: +51Y10M3D12h54m45s | update: not scheduled | expiration: +57Y12M17h25m37s | journal flush: not scheduled | notify: not scheduled | DNSSEC re-sign: +50m43s | NSEC3 resalt: +29D23h50m43s | parent DS query: not scheduled
```
Finally, forcing a zone reload with `knotc zone-reload example.com` leads to a server crash due to an invalid pointer. I guess this is just an outcome of the memory corruption.
Please note that this issue is not 100% reproducible. While writing this report, I saw a few cases where everything went smoothly. On the other hand, it reproduces regularly enough not to be considered a random bug.
Disabling outgoing IXFR in the BIND process works around this issue.

Issue #590: Add option to manually trigger key rollover with automatic key management
https://gitlab.nic.cz/knot/knot-dns/-/issues/590
Reported by Ondřej Caletka, last updated 2018-08-30

It would be nice if there was a command to trigger an immediate key rollover. For instance, when some manual work is required to update the parent DS record, the operator could initiate the KSK rollover at their convenience instead of on a regular interval. Setting a KSK lifetime in the policy can lead to unnoticed rollovers, which can then get stuck in the middle for quite a long time and block forthcoming ZSK rollovers.
I know it is possible to roll a key manually by generating a new key and retiring the old one using the `keymgr` utility, but such an approach is too dangerous for a casual operator and can very easily lead to a bogus zone. I would like to have a simple command like `knotc zone-keyroll <zone-id> <key-id>`.

Issue #589: Cannot switch to a previously used ksk-shared dnssec policy
https://gitlab.nic.cz/knot/knot-dns/-/issues/589
Reported by Ondřej Caletka, last updated 2018-07-06

With Knot 2.6.7, let there be a zone with an automatic key management policy, where the policy employs KSK sharing, like this:
```
policy:
- id: ecdsa_fast
ksk-shared: on
zsk-lifetime: 1h
ksk-lifetime: 5h
propagation-delay: 10s
rrsig-lifetime: 2h
rrsig-refresh: 1h
ksk-submission: local_resolver
single-type-signing: off
- id: ecdsa_fast_single
ksk-shared: on
zsk-lifetime: 1h
ksk-lifetime: 5h
propagation-delay: 10s
rrsig-lifetime: 2h
rrsig-refresh: 1h
single-type-signing: on
ksk-submission: local_resolver
zone:
- domain: "zone.66.acad.cz"
file: "/etc/knot/%s.zone"
zonefile-sync: -1
zonefile-load: difference
dnssec-signing: on
dnssec-policy: ecdsa_fast
acl: acl_slave
```
If the zone is transitioned to a different policy by changing the `dnssec-policy` option in the zone definition, and then back again to the previously used policy, the transition back fails with the following error in the system log:
```
Jun 24 18:11:15 n66.clones.cesnet.cz knotd[11612]: info: configuration reloaded
Jun 24 18:11:15 n66.clones.cesnet.cz knotd[11612]: info: [zone.66.acad.cz.] DNSSEC, signing zone
Jun 24 18:11:15 n66.clones.cesnet.cz knotd[11612]: info: [zone.66.acad.cz.] DNSSEC, signing scheme rollover started
Jun 24 18:11:15 n66.clones.cesnet.cz knotd[11612]: error: [zone.66.acad.cz.] DNSSEC, failed to initialize (not exists)
Jun 24 18:11:15 n66.clones.cesnet.cz knotd[11612]: error: [zone.66.acad.cz.] zone event 'DNSSEC re-sign' failed (not exists)
```
Creating a new policy with a different `id`, or disabling the `ksk-shared` option in the policy, works around the issue.
As a related feature request: I believe there is some additional metadata in the keys LMDB that is not visible to the `keymgr` utility. It would be nice to have a tool to inspect and/or fix those policy-related parts of the database, perhaps as a new feature of `keymgr`.

Issue #588: CSK deactivated too early when rolling to KSK+ZSK policy
https://gitlab.nic.cz/knot/knot-dns/-/issues/588
Reported by Ondřej Caletka, last updated 2018-06-28

With Knot 2.6.7, let there be a zone with a CSK, having a secure delegation:
```
# dig zone.66.acad.cz dnskey +dnssec +multi
; <<>> DiG 9.10.3-P4-Debian <<>> zone.66.acad.cz dnskey +dnssec +multi
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 16303
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags: do; udp: 4096
;; QUESTION SECTION:
;zone.66.acad.cz. IN DNSKEY
;; ANSWER SECTION:
zone.66.acad.cz. 60 IN DNSKEY 257 3 13 (
PMKxlJcyu+72MFU/7Bb+a9VI5fkSyJ/RITuzgYnCGC9e
3My96ThEsFtJQunWpSvpOI7X2GZ/xhts8N+6/xDjaQ==
) ; KSK; alg = ECDSAP256SHA256; key id = 50801
zone.66.acad.cz. 60 IN RRSIG DNSKEY 13 4 60 (
20180624161108 20180624124108 50801 zone.66.acad.cz.
0Aa/hizP73s/q6qiU/yKzAwM/LX+UjU+6bEm+gai7Pk6
Kth6l8A0graQYIMw4HD0czviFt1D9qRouG/iqmVpsA== )
# keymgr zone.66.acad.cz list
08cd5137185f4333e42b1c046cf29e684b771c86 ksk=yes zsk=yes tag=50801 algorithm=13 public-only=no created=1529849198 pre-active=0 publish=1529849198 ready=1529849208 active=1529849448 retire-active=0 retire=0 post-active=0 remove=0
# cat knot.conf
… irrelevant parts omitted …
policy:
- id: ecdsa_fast
ksk-shared: on
zsk-lifetime: 1h
ksk-lifetime: 5h
propagation-delay: 10s
rrsig-lifetime: 2h
rrsig-refresh: 1h
ksk-submission: local_resolver
single-type-signing: off
- id: ecdsa_fast_single
ksk-shared: on
zsk-lifetime: 1h
ksk-lifetime: 5h
propagation-delay: 10s
rrsig-lifetime: 2h
rrsig-refresh: 1h
single-type-signing: on
ksk-submission: local_resolver
zone:
- domain: "zone.66.acad.cz"
template: mastersign
dnssec-policy: ecdsa_fast_single
file: "/etc/knot/%s.zone"
zonefile-sync: -1
zonefile-load: difference
dnssec-signing: on
dnssec-policy: manual
acl: acl_slave
```
Let's suppose we want to migrate to ZSK + KSK signing, so we switch to a different policy, which differs only in the `single-type-signing:` option. In the system log, the change goes like this:
```
Jun 24 16:15:01 n66.clones.cesnet.cz knotd[4246]: info: configuration reloaded
Jun 24 16:15:01 n66.clones.cesnet.cz knotd[4246]: info: [zone.66.acad.cz.] DNSSEC, signing zone
Jun 24 16:15:01 n66.clones.cesnet.cz knotd[4246]: info: [zone.66.acad.cz.] DNSSEC, signing scheme rollover started
Jun 24 16:15:01 n66.clones.cesnet.cz knotd[4246]: info: [zone.66.acad.cz.] DNSSEC, key, tag 50801, algorithm ECDSAP256SHA256, CSK, public, active
Jun 24 16:15:01 n66.clones.cesnet.cz knotd[4246]: info: [zone.66.acad.cz.] DNSSEC, key, tag 30240, algorithm ECDSAP256SHA256, KSK, public
Jun 24 16:15:01 n66.clones.cesnet.cz knotd[4246]: info: [zone.66.acad.cz.] DNSSEC, key, tag 4, algorithm ECDSAP256SHA256, public
Jun 24 16:15:01 n66.clones.cesnet.cz knotd[4246]: info: [zone.66.acad.cz.] DNSSEC, signing started
Jun 24 16:15:01 n66.clones.cesnet.cz knotd[4246]: info: [zone.66.acad.cz.] DNSSEC, successfully signed
Jun 24 16:15:01 n66.clones.cesnet.cz knotd[4246]: info: [zone.66.acad.cz.] DNSSEC, next signing at 2018-06-24T16:15:11
Jun 24 16:15:11 n66.clones.cesnet.cz knotd[4246]: info: [zone.66.acad.cz.] DNSSEC, signing zone
Jun 24 16:15:11 n66.clones.cesnet.cz knotd[4246]: info: [zone.66.acad.cz.] DNSSEC, key, tag 50801, algorithm ECDSAP256SHA256, CSK, public, active
Jun 24 16:15:11 n66.clones.cesnet.cz knotd[4246]: info: [zone.66.acad.cz.] DNSSEC, key, tag 30240, algorithm ECDSAP256SHA256, KSK, public, ready, active
Jun 24 16:15:11 n66.clones.cesnet.cz knotd[4246]: info: [zone.66.acad.cz.] DNSSEC, key, tag 4, algorithm ECDSAP256SHA256, public, active
Jun 24 16:15:11 n66.clones.cesnet.cz knotd[4246]: info: [zone.66.acad.cz.] DNSSEC, signing started
Jun 24 16:15:11 n66.clones.cesnet.cz knotd[4246]: info: [zone.66.acad.cz.] DNSSEC, successfully signed
Jun 24 16:15:11 n66.clones.cesnet.cz knotd[4246]: info: [zone.66.acad.cz.] DNSSEC, next signing at 2018-06-24T16:15:21
Jun 24 16:15:11 n66.clones.cesnet.cz knotd[4246]: notice: [zone.66.acad.cz.] DNSSEC, KSK submission, waiting for confirmation
Jun 24 16:15:11 n66.clones.cesnet.cz knotd[4246]: info: [zone.66.acad.cz.] parent DS check, outgoing, 2001:718::53@53: KSK submission attempt: negative
```
Please note that the new KSK has been submitted, but the DS record has not been updated yet…
```
Jun 24 16:15:21 n66.clones.cesnet.cz knotd[4246]: info: [zone.66.acad.cz.] DNSSEC, signing zone
Jun 24 16:15:21 n66.clones.cesnet.cz knotd[4246]: info: [zone.66.acad.cz.] DNSSEC, key, tag 50801, algorithm ECDSAP256SHA256, CSK, public
Jun 24 16:15:21 n66.clones.cesnet.cz knotd[4246]: info: [zone.66.acad.cz.] DNSSEC, key, tag 30240, algorithm ECDSAP256SHA256, KSK, public, ready, active
Jun 24 16:15:21 n66.clones.cesnet.cz knotd[4246]: info: [zone.66.acad.cz.] DNSSEC, key, tag 4, algorithm ECDSAP256SHA256, public, active
Jun 24 16:15:21 n66.clones.cesnet.cz knotd[4246]: info: [zone.66.acad.cz.] DNSSEC, signing started
Jun 24 16:15:21 n66.clones.cesnet.cz knotd[4246]: info: [zone.66.acad.cz.] DNSSEC, successfully signed
Jun 24 16:15:21 n66.clones.cesnet.cz knotd[4246]: info: [zone.66.acad.cz.] DNSSEC, next signing at 2018-06-24T16:15:31
```
At this moment, the zone becomes bogus because the DNSKEY RRset is no longer signed by the CSK with id 50801.
```
Jun 24 16:15:31 n66.clones.cesnet.cz knotd[4246]: info: [zone.66.acad.cz.] DNSSEC, signing zone
Jun 24 16:15:31 n66.clones.cesnet.cz knotd[4246]: info: [zone.66.acad.cz.] DNSSEC, key, tag 30240, algorithm ECDSAP256SHA256, KSK, public, ready, active
Jun 24 16:15:31 n66.clones.cesnet.cz knotd[4246]: info: [zone.66.acad.cz.] DNSSEC, key, tag 4, algorithm ECDSAP256SHA256, public, active
Jun 24 16:15:31 n66.clones.cesnet.cz knotd[4246]: info: [zone.66.acad.cz.] DNSSEC, signing started
Jun 24 16:15:31 n66.clones.cesnet.cz knotd[4246]: info: [zone.66.acad.cz.] DNSSEC, successfully signed
Jun 24 16:15:31 n66.clones.cesnet.cz knotd[4246]: info: [zone.66.acad.cz.] DNSSEC, next signing at 2018-06-24T17:15:11
```
Now the CSK is even removed from the zone, even though it is still referenced by the parent DS record. And although the rollover is finished, the parent DS check keeps running:
```
Jun 24 16:17:11 n66.clones.cesnet.cz knotd[4246]: info: [zone.66.acad.cz.] parent DS check, outgoing, 2001:718::53@53: KSK submission attempt: negative
Jun 24 16:18:11 n66.clones.cesnet.cz knotd[4246]: info: [zone.66.acad.cz.] parent DS check, outgoing, 2001:718::53@53: KSK submission attempt: negative
Jun 24 16:19:11 n66.clones.cesnet.cz knotd[4246]: info: [zone.66.acad.cz.] parent DS check, outgoing, 2001:718::53@53: KSK submission attempt: negative
```
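The invariant violated here can be stated simply: a key must not be retired while the parent DS record still references it. A tiny illustrative check (key tags taken from the logs above; the function is a sketch of the rule, not Knot's logic):

```python
def safe_to_retire(key_tag, parent_ds_tags):
    """A key may be retired only once no parent DS record points to it
    (in practice, only after the old DS has also expired from caches)."""
    return key_tag not in parent_ds_tags

# From the logs: the parent DS still points at the CSK (tag 50801),
# since every "KSK submission attempt" for the new KSK was negative.
parent_ds = {50801}
print(safe_to_retire(50801, parent_ds))  # → False: retiring 50801 makes the zone bogus
print(safe_to_retire(30240, parent_ds))  # → True: tag 30240 was never in the DS
```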
When rolling in the opposite direction, from ZSK+KSK to CSK, no issue is observed.
Milestone: next.

Issue #287: nsupdate: SOA RRset prerequisite does not work
https://gitlab.nic.cz/knot/knot-dns/-/issues/287
Reported by Ondřej Caletka, last updated 2014-08-18

Let there be a zone with contents:
```
example.com. 60 IN SOA n71.nebula.cesnet.cz. root.example.com. 20 120 10 3600 60
example.com. 60 NS n71.nebula.cesnet.cz.
```
When trying to do a DDNS update using the `nsupdate` utility from BIND, using the SOA RRset as a prerequisite does not work:
```
$ nsupdate
> server n71.nebula.cesnet.cz.
> prereq yxrrset example.com. IN SOA n71.nebula.cesnet.cz. root.example.com. 20 120 10 3600 60
> update add test.example.com. 60 IN TXT "TEST"
> show
Outgoing update query:
;; ->>HEADER<<- opcode: UPDATE, status: NOERROR, id: 0
;; flags:; ZONE: 0, PREREQ: 0, UPDATE: 0, ADDITIONAL: 0
;; PREREQUISITE SECTION:
example.com. 0 IN SOA n71.nebula.cesnet.cz. root.example.com. 20 120 10 3600 60
;; UPDATE SECTION:
test.example.com. 60 IN TXT "TEST"
> send
update failed: NOTZONE
```
Milestone: v1.5.1.

Issue #275: knsupdate: Origin not honored when deleting
https://gitlab.nic.cz/knot/knot-dns/-/issues/275
Reported by Ondřej Caletka, last updated 2014-08-18

See this `knsupdate` session (version 1.5.0):
```
# knsupdate
zone example.com
origin example.com
add test 60 TXT "testik"
send
del test
send
; Error: update failed: NOTZONE
del test.example.com.
send
```
Milestone: v1.5.1. Assignee: Daniel Salzman.

Issue #161: TSIG not supported on NOTIFY messages
https://gitlab.nic.cz/knot/knot-dns/-/issues/161
Reported by Ondřej Caletka, last updated 2017-05-28

It looks like there is no actual TSIG support for NOTIFY messages, in either master or slave mode. Outgoing notify messages do not contain any TSIG RR. Incoming notify messages are accepted without TSIG even if there is a key configured for the master server.
Although there is no real danger in not securing NOTIFY messages, the current behaviour of Knot leads to various interoperability issues with other DNS software such as BIND or NSD3.

Issue #48: storage directory should use localstatedir instead of sharedstatedir
https://gitlab.nic.cz/knot/knot-dns/-/issues/48
Reported by Ondřej Caletka, last updated 2018-06-13

According to the [GNU Coding Standards](http://www.gnu.org/prep/standards/html_node/Directory-Variables.html), `localstatedir` is a directory for installing data files which the programs modify while they run, and which pertain to one specific machine. On the other hand, the `sharedstatedir` directory implies the possibility of sharing its content between different machines.
As Knot is probably not able to cope with its storage data being overwritten by another instance on a different machine, it would be better to use `localstatedir` instead.

Issue #46: better knot.sample.conf
https://gitlab.nic.cz/knot/knot-dns/-/issues/46
Reported by Ondřej Caletka, last updated 2018-06-13

The sample config file should be more usable out of the box. No one (except developers) probably wants to run knot listening on localhost port 53533. And running under the root user by default is probably not a very good idea either.
I propose this change to the sample config file:
```
--- a/samples/knot.sample.conf.in
+++ b/samples/knot.sample.conf.in
@@ -7,10 +7,12 @@
 system {
   identity "@package@ @version@";
+  user knot.knot;
 }
 interfaces {
-  my-iface { address 127.0.0.1@53533; }
+  all_v4 { address 0.0.0.0@53; }
+  all_v6 { address [::]@53; }
 }
 zones {
```
Milestone: v1.3.0. Assignee: Daniel Salzman.

Issue #34: knotc - use TSIG key from config file
https://gitlab.nic.cz/knot/knot-dns/-/issues/34
Reported by Ondřej Caletka, last updated 2018-06-13

When remote control is set up using inet sockets and TSIG protection, knotc needs to be invoked with an explicit key definition. This is unnecessary, as the key could be read from the config file in the same way the control socket address and port are read.

Issue #33: knot utilities should be installed to bindir
https://gitlab.nic.cz/knot/knot-dns/-/issues/33
Reported by Ondřej Caletka, last updated 2018-06-13

There is no reason to install 'kdig', 'khost' and 'knsupdate' into 'sbindir'. Similar utilities from BIND are installed into /usr/bin.
Milestone: v1.3.0-rc3.

Issue #32: configuration - remote control via unix socket should be enabled by default
https://gitlab.nic.cz/knot/knot-dns/-/issues/32
Reported by Ondřej Caletka, last updated 2018-06-13

Without any configuration, knotc tries to connect to the unix domain socket /var/run/knot/knot.sock. This socket is not created by knotd unless the config file states:
```
control {
    listen-on "/var/run/knot/knot.sock";
}
```
This should be the default behavior, to make knotd controllable out of the box.
Milestone: v1.3.0-rc3.