
When ingress-dns is used with IPv6, the ping from host OS (Windows) fails #14872

Open
oldium opened this issue Aug 29, 2022 · 20 comments

Comments

@oldium
Contributor

oldium commented Aug 29, 2022

What Happened?

Tested on Windows 11. I followed the steps at https://minikube.sigs.k8s.io/docs/handbook/addons/ingress-dns/ to enable ingress-dns and forward test-domain queries to the minikube DNS.

Then I tried to ping the configured host, but the ping failed:

#> ping frontend.test
Ping request could not find host frontend.test. Please check the name and try again.

nslookup succeeded, though:

> nslookup frontend.test 172.27.2.177
Server:  UnKnown
Address:  172.27.2.177

Non-authoritative answer:
Name:    frontend.test
Addresses:  172.27.2.177
          172.27.2.177

I checked this with Wireshark. The problem is that my Windows machine prefers IPv6, so it sent an AAAA query first but received an A response. Such a response is invalid, which is why DNS resolution failed.
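
For reference, below is a minimal sketch of the corrected behaviour. The real addon is a Node.js DNS server, so this is only an illustration in Go using the github.com/miekg/dns library; the zone and IP address are just examples taken from this report. Known names answer A queries with the ingress IP, while AAAA queries for the same names get an empty NOERROR ("NoData") reply instead of an A record, which is what an IPv6-preferring resolver such as the Windows one expects.

```go
// Illustrative sketch only; not the addon's actual (Node.js) implementation.
package main

import (
	"log"
	"net"

	"github.com/miekg/dns"
)

var ingressIP = net.ParseIP("172.27.2.177") // example ingress address from the report

func handle(w dns.ResponseWriter, r *dns.Msg) {
	m := new(dns.Msg)
	m.SetReply(r) // NOERROR with an empty answer section by default

	if len(r.Question) == 1 {
		q := r.Question[0]
		switch q.Qtype {
		case dns.TypeA:
			// IPv4 query: return the ingress address.
			m.Answer = append(m.Answer, &dns.A{
				Hdr: dns.RR_Header{Name: q.Name, Rrtype: dns.TypeA, Class: dns.ClassINET, Ttl: 300},
				A:   ingressIP,
			})
		case dns.TypeAAAA:
			// No IPv6 address is configured, so leave the answer section empty
			// (NoData). Replying with an A record to an AAAA query is invalid
			// and is what broke resolution on Windows.
		}
	}
	_ = w.WriteMsg(m)
}

func main() {
	dns.HandleFunc("test.", handle)
	log.Fatal((&dns.Server{Addr: ":53", Net: "udp"}).ListenAndServe())
}
```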

@sharifelgamal, could you please comment on or merge pull request sharifelgamal/minikube-ingress-dns#4 and update the image version to fix this issue?

Attach the log file

log.txt

Operating System

Windows

Driver

Hyper-V

@klaases
Contributor

klaases commented Sep 9, 2022

Hi @oldium, nice work finding and debugging the issue. Have you been able to get your PR tested?

@oldium
Contributor Author

oldium commented Sep 11, 2022

@klaases I made a few changes to the pull request and re-tested it thoroughly on Windows with Acrylic DNS Proxy (in forwarding-proxy mode, no caching). I set the DNS_NODATA_DELAY_MS="20" environment variable on both ingress-dns minikube pods to delay the NoData responses, which allows the proxy to query the DNS servers of two independent minikube profiles; see the Acrylic proxy logs below. Request 00002 (type A, IPv4) was answered almost immediately by one server, while the other server delayed its response by 20 ms. Request 00003 (type AAAA, IPv6) had no answers, so both servers delayed their responses by 20 ms.

2022-09-11 20:22:12.027 [I] TDnsResolver.HandleDnsRequestForIPv4Udp: Request ID 00002 received from client 127.0.0.1:56863 [OC=0;RD=1;QDC=1;Q[1]=frontend.home.arpa;T[1]=A;Z=0002010000010000000000000866726F6E74656E6404686F6D6504617270610000010001].
2022-09-11 20:22:12.027 [I] TDnsResolver.HandleDnsRequestForIPv4Udp: Request ID 00002>50074 forwarded to server 1.
2022-09-11 20:22:12.027 [I] TDnsResolver.HandleDnsRequestForIPv4Udp: Request ID 00002>50074 forwarded to server 2.
2022-09-11 20:22:12.049 [I] TDnsResolver.HandleDnsResponseForIPv4Udp: Response ID 50074 received from server 2 in 10.0 msecs [OC=0;RC=0;TC=0;RD=1;RA=1;AA=0;QDC=1;ANC=1;NSC=0;ARC=0;Q[1]=frontend.home.arpa;T[1]=A;A[1]=frontend.home.arpa>171.19.19.172;Z=C39A818000010001000000000866726F6E74656E6404686F6D65046172706100000100010866726F6E74656E6404686F6D65046172706100000100010000012C0004AC1313AB].
2022-09-11 20:22:12.049 [I] TDnsResolver.HandleDnsResponseForIPv4Udp: Response ID 50074>00002 received from server 2 sent to client 127.0.0.1:56863.
2022-09-11 20:22:12.051 [I] TDnsResolver.HandleDnsRequestForIPv4Udp: Request ID 00003 received from client 127.0.0.1:56866 [OC=0;RD=1;QDC=1;Q[1]=frontend.home.arpa;T[1]=AAAA;Z=0003010000010000000000000866726F6E74656E6404686F6D65046172706100001C0001].
2022-09-11 20:22:12.051 [I] TDnsResolver.HandleDnsRequestForIPv4Udp: Request ID 00003>39906 forwarded to server 1.
2022-09-11 20:22:12.051 [I] TDnsResolver.HandleDnsRequestForIPv4Udp: Request ID 00003>39906 forwarded to server 2.
2022-09-11 20:22:12.058 [I] TDnsResolver.HandleDnsResponseForIPv4Udp: Response ID 50074 received from server 1 in 26.4 msecs [OC=0;RC=0;TC=0;RD=1;RA=1;AA=0;QDC=1;ANC=0;NSC=0;ARC=0;Z=C39A818000010000000000000866726F6E74656E6404686F6D6504617270610000010001].
2022-09-11 20:22:12.058 [I] TDnsResolver.HandleDnsResponseForIPv4Udp: Response ID 50074 received from server 1 discarded.
2022-09-11 20:22:12.073 [I] TDnsResolver.HandleDnsResponseForIPv4Udp: Response ID 39906 received from server 2 in 25.4 msecs [OC=0;RC=0;TC=0;RD=1;RA=1;AA=0;QDC=1;ANC=0;NSC=0;ARC=0;Z=9BE2818000010000000000000866726F6E74656E6404686F6D65046172706100001C0001].
2022-09-11 20:22:12.078 [I] TDnsResolver.HandleDnsResponseForIPv4Udp: Response ID 39906>00003 received from server 2 sent to client 127.0.0.1:56866.
2022-09-11 20:22:12.078 [I] TDnsResolver.HandleDnsResponseForIPv4Udp: Response ID 39906 received from server 1 in 25.8 msecs [OC=0;RC=0;TC=0;RD=1;RA=1;AA=0;QDC=1;ANC=0;NSC=0;ARC=0;Z=9BE2818000010000000000000866726F6E74656E6404686F6D65046172706100001C0001].
2022-09-11 20:22:12.078 [I] TDnsResolver.HandleDnsResponseForIPv4Udp: Response ID 39906 received from server 1 discarded.

I also tested this without setting DNS_NODATA_DELAY_MS (the default configuration), and it works as expected too: responses are sent immediately. This should be the default use case for the majority of users.

Maybe it would be good to document this somewhere. I added a new environment variable, DNS_NODATA_DELAY_MS, which delays NoData responses (responses without any IP address). This allows the DNS proxy (Acrylic on Windows, or dnsmasq on Linux) to query two servers, where the fastest response is accepted and forwarded to the requesting client while all later responses are discarded.
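
To make the mechanism concrete, here is a hedged sketch of the delay logic. The actual addon is Node.js; this Go fragment, again using github.com/miekg/dns, is only an illustration and the helper names are hypothetical. An empty (NoData) answer is held back for the configured number of milliseconds before being written, so a forwarding proxy such as Acrylic or dnsmasq that races several minikube profiles can accept the fastest positive answer and discard the delayed empty ones.

```go
// Hypothetical sketch of the DNS_NODATA_DELAY_MS behaviour, not the PR's code.
package dnsdelay

import (
	"os"
	"strconv"
	"time"

	"github.com/miekg/dns"
)

// noDataDelay reads DNS_NODATA_DELAY_MS; when the variable is unset or invalid
// the delay is zero, i.e. responses are sent immediately (the default case).
func noDataDelay() time.Duration {
	ms, err := strconv.Atoi(os.Getenv("DNS_NODATA_DELAY_MS"))
	if err != nil || ms <= 0 {
		return 0
	}
	return time.Duration(ms) * time.Millisecond
}

// writeReply sends positive answers immediately and delays NoData answers.
func writeReply(w dns.ResponseWriter, m *dns.Msg) error {
	if len(m.Answer) == 0 {
		time.Sleep(noDataDelay())
	}
	return w.WriteMsg(m)
}
```

This matches the Acrylic log above: for request 00002 the positive A answer from server 2 arrived first and was forwarded to the client, while the delayed NoData answer from server 1 arrived roughly 20 ms later and was discarded.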

@oldium
Contributor Author

oldium commented Sep 12, 2022

It would also be good to make DNS_NODATA_DELAY_MS configurable somehow, since it is currently not possible to add or modify environment variables for the pods.

@oldium
Contributor Author

oldium commented Sep 30, 2022

I updated the minikube-ingress-dns pull request to use a ConfigMap and added some description. I hope this could be used in minikube as well; if not, please suggest a better solution. Thanks.
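
For readers following along, the ConfigMap approach could look roughly like the following. This is a sketch only; the object name, namespace and key are hypothetical and may differ from what the pull request actually uses.

```yaml
# Hypothetical sketch; the names used by the pull request may differ.
apiVersion: v1
kind: ConfigMap
metadata:
  name: minikube-ingress-dns
  namespace: kube-system
data:
  DNS_NODATA_DELAY_MS: "20"
```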

@oldium
Contributor Author

oldium commented Sep 30, 2022

I added a file watcher so that the new value from the ConfigMap is applied automatically (this takes a few seconds, because the ConfigMap change is not visible to the pod immediately). I think this is a generic solution and can be applied.
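
For context, kubelet updates a mounted ConfigMap by swapping a symlink inside the volume, which is why the change only becomes visible after a short delay; a watcher on that directory can pick up the new value. A rough sketch follows, in Go with the github.com/fsnotify/fsnotify library; the mount path and file name are illustrative and this is not the PR's actual (Node.js) code.

```go
// Rough sketch: watch the ConfigMap mount directory and re-read the delay
// value whenever kubelet swaps in updated data. Paths are hypothetical.
package main

import (
	"log"
	"os"
	"strings"

	"github.com/fsnotify/fsnotify"
)

const configDir = "/etc/ingress-dns" // hypothetical ConfigMap mount path

func readDelay() string {
	b, err := os.ReadFile(configDir + "/DNS_NODATA_DELAY_MS")
	if err != nil {
		return "0"
	}
	return strings.TrimSpace(string(b))
}

func main() {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()

	if err := watcher.Add(configDir); err != nil {
		log.Fatal(err)
	}

	log.Printf("initial DNS_NODATA_DELAY_MS=%s", readDelay())
	for event := range watcher.Events {
		// ConfigMap updates appear as create/remove events when kubelet
		// swaps the ..data symlink inside the mounted volume.
		if event.Op&(fsnotify.Create|fsnotify.Remove) != 0 {
			log.Printf("config changed: DNS_NODATA_DELAY_MS=%s", readDelay())
		}
	}
}
```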

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 29, 2022
@oldium

This comment was marked as duplicate.

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 29, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 29, 2023
@oldium

This comment was marked as duplicate.

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 1, 2023

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 30, 2023
@oldium

This comment was marked as duplicate.

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 2, 2023

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 23, 2024
@oldium

This comment was marked as duplicate.

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 24, 2024
@9SMTM6

9SMTM6 commented Mar 11, 2024

Since my mention in the PR with the fix did not show up here, I want to record the current state:

The PR has been in its current code state since April of last year.

The owner's original statement (before some improvements) was that it looked okay but needed some testing.

I have since tested it to the best of my ability, with suggestions from @oldium.

That was concluded at the beginning of February, and I documented my testing steps so that anyone who wants to test can simply follow them.

Other than that original communication, there has been no response from the owner.

ingress-dns itself has not been updated in three years. When I looked at its dependencies, the first one I found, the Node 12 base container, has been EOL since 2022. The PR from @oldium does include dependency updates. I realize ingress-dns isn't really THAT security relevant, since it is not meant to be used in production, but it still feels kind of wrong.

Perhaps another maintainer has time to look at this? Or could at least freeze this issue, so it doesn't keep getting marked stale with all the attendant comment bloat.


@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 9, 2024
@oldium

This comment was marked as duplicate.

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 10, 2024

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 8, 2024
@oldium

This comment was marked as duplicate.

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 8, 2024

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 7, 2024
@oldium
Contributor Author

oldium commented Dec 7, 2024

Everything is prepared; somebody just needs to merge the change.

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 7, 2024