Nslookup switches

DNS breaking on Win11 randomly

2024.05.14 08:52 Syzodia DNS breaking on Win11 randomly

So this is a problem I have experienced on a new laptop running Windows 11, and did not experience on my Windows 10 PC until I upgraded it to Windows 11. This problem does not occur on my smartphones.
The problem is that, seemingly randomly, my devices (so the PC and laptop) will cease to resolve DNS queries on wi-fi. An example of how it looks from nslookup is below:
Server:  UnKnown
Address:  192.168.20.1

*** UnKnown can't find google.com: No response from server
I usually first observe this from the web browser's DNS_PROBE_FINISHED_NO_INTERNET error. Note that I can still ping IP addresses directly (e.g. I can still ping 1.1.1.1).
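A minimal sketch of the checks I plan to run the next time it happens, to separate "the configured resolver isn't answering" from "no resolver is reachable at all" (google.com is just the example name, and 192.168.20.1 is the gateway/resolver address from the output above):

rem Ask whatever resolver the adapter is currently using
nslookup google.com
rem Ask the gateway and a public resolver explicitly
nslookup google.com 192.168.20.1
nslookup google.com 1.1.1.1
rem Show which DNS servers the Wi-Fi adapter actually learned
ipconfig /all | findstr /C:"DNS Servers"
rem Clear the local resolver cache before re-testing in the browser
ipconfig /flushdns

If the explicit queries succeed while the plain one fails, the adapter's DNS settings are the suspect; if everything times out, it looks more like port 53 traffic being dropped on the Wi-Fi path.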
Possibly relevant software common to both devices:
I have tried the following measures; they either did not solve the problem at all, or only solved it temporarily and were effective only once before a system reboot:
The best workaround I have at the moment is switching on the Windscribe VPN, but this only works as long as I stay on the VPN. Turning the VPN off brings the problem back.
I cannot say whether ethernet is a solution, since I cannot figure out what triggers the problem, but I need this squashed for Wi-Fi anyway, especially on my laptop.
submitted by Syzodia to techsupport [link] [comments]


2024.05.12 15:21 Tigoror Squarespace sub domain behaviour when custom NS records are set for the root domain

I have a problem:
I have a root domain hosted on Squarespace, and this root domain uses Wix's NS records. Let's call it example.com from now on.
I created a subdomain, api.example.com, as well as a hosted zone for it in AWS Route 53. Then I added the NS records that Route 53 generated to this api subdomain.
Now, it has been ~48 hours since I added them, but I can't trace anything for that subdomain. I am using commands like dig and nslookup.
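Roughly the kind of dig checks I have been trying, to see whether the delegation is visible at all (the awsdns hostname is a placeholder for one of the nameservers Route 53 generated for the hosted zone):

# Follow the delegation chain from the root down to the subdomain
dig NS api.example.com +trace
# Ask one of the Route 53 nameservers directly; if this answers,
# the hosted zone itself is fine and only the delegation is missing
dig @ns-123.awsdns-45.com api.example.com SOA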
There might also be a problem here, because the Squarespace domain dashboard says: "Your DNS records are managed with your third-party nameserver provider. To activate the DNS records below, switch to Squarespace nameservers." This means the default Squarespace records are not active, and I'm not sure whether it also affects the Custom records section.
That message appears because I use Wix NS records, but unfortunately I can't find any information on whether this also affects the behaviour of the Custom records section.
Question
When I use custom NS records for the root domain, does it affect subdomains created?
submitted by Tigoror to squarespace [link] [comments]


2024.05.12 15:17 Tigoror Route53 Hosted Zone Name Servers on Squarespace

Hello Everyone!
I have a problem:
I have a root domain hosted on Squarespace, and this root domain uses Wix's NS records. Let's call it example.com from now on.
I created a subdomain, api.example.com, as well as a hosted zone for it in Route 53. Then I added the NS records that Route 53 generated to this api subdomain.
Now, it has been ~48 hours since I added them, but I can't trace anything for that subdomain. I am using commands like dig and nslookup.
There might also be a problem here, because the Squarespace domain dashboard says: "Your DNS records are managed with your third-party nameserver provider. To activate the DNS records below, switch to Squarespace nameservers." This means the default Squarespace records are not active, and I'm not sure whether it also affects the Custom records section.
That message appears because I use Wix NS records, but unfortunately I can't find any information on whether this also affects the behaviour of the Custom records section.
My desired state is:
A hosted zone on AWS that controls the subdomain created on Squarespace, with the ability to create new CNAME and A records for nested subdomains like abc.api.* and something.api.*.
Is this even possible, or am I missing something?
Also, regarding Squarespace, when I use custom NS records for the root domain, does it affect subdomains created?
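From what I can tell, a delegation like this only takes effect if the zone that actually answers for example.com (i.e. whatever Wix serves, since the root uses Wix nameservers) publishes NS records for the api label - roughly like this hypothetical sketch, where the awsdns hostnames stand in for the four names Route 53 generated:

api.example.com.    NS    ns-123.awsdns-45.com.
api.example.com.    NS    ns-678.awsdns-09.net.
api.example.com.    NS    ns-1357.awsdns-24.org.
api.example.com.    NS    ns-2468.awsdns-13.co.uk.

If those records only exist in the (inactive) Squarespace custom records section, resolvers would never see them.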
submitted by Tigoror to aws [link] [comments]


2024.05.01 01:49 TheCyberWarden Force Brave Safe Search - Windows DNS Server

TL;DR, there are some gaps in my DNS knowledge and I can't permanently force safe search on search engines using Windows DNS due to problems with subdomains.
Hello, I'm trying to configure the new Brave Safe Search enforcement mechanisms which the Brave team recently implemented for network admins to use (https://twitter.com/brave/status/1772747339024707756). I'm trying to set the appropriate DNS records on our internal DNS server (replicated on each AD domain controller running Microsoft Windows Server 2016.)
I've created a DNS zone for "search.brave.com". Since Windows didn't let me put a CNAME record at the zone apex (?), I've put in two A records instead, with their values being the IP addresses returned when resolving "forcesafe.search.brave.com" with nslookup. And that seems to work.
After that, I added domain delegation for "safe.search.brave.com" in order to delegate requests for that subdomain to Brave's name servers.
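For reference, roughly the command-line equivalent of what I did in the DNS console, in case it makes the setup easier to follow (a sketch only - the IP addresses and the Brave nameserver name below are placeholders, not the real values):

rem Create an AD-integrated primary zone that shadows search.brave.com
dnscmd /ZoneAdd search.brave.com /DsPrimary
rem Point the zone apex at the forcesafe addresses (placeholder IPs)
dnscmd /RecordAdd search.brave.com @ A 203.0.113.10
dnscmd /RecordAdd search.brave.com @ A 203.0.113.11
rem Delegate safe.search.brave.com back to Brave (placeholder nameserver)
dnscmd /RecordAdd search.brave.com safe NS ns1.brave.com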
The only problem is that I'd have to add delegation for each and every subdomain of "search.brave.com" -- because queries for "cdn.search.brave.com", "imgs.search.brave.com", and every other unhandled subdomain were not returning valid DNS results. You can probably see where this is going. I could try manually adding delegation for every known subdomain, but I don't think that's the "right way" to do things.
Is there a catch-all when it comes to subdomains and domain delegation? Are there some other DNS records I should use? (I tried changing the SOA of the zone to be Brave's name server, but Windows keeps switching the SOA value back to the domain controller / local DNS server. I also tried looking into wildcard DNS records, and I wasn't able to get that to work either...). (I also tried messing around with NS records in the zone so that only Brave's name servers were in the NS records, but Windows kept adding the local DNS servers back as NS records.)
Ideally, I'd just like the DNS zone to fall back to DNS recursion if there's no specific record for a subdomain. Is there a reason my setup isn't currently falling back to the DNS forwarders for unknown subdomains?
I tried to do a similar thing with "google.com.mx" the other day -- pointing it to forcesafesearch.google.com, but requests for subdomains like "maps.google.com.mx" are being eaten.
Anyway, I'm not familiar with the nitty-gritty of the DNS specifications / RFCs, so maybe there's something I'm missing here. Thanks for your help!
submitted by TheCyberWarden to sysadmin [link] [comments]


2024.04.30 02:24 Capnlanky Pihole won't properly work unless the ASUS router is also resolving DNS?

Or does it? I'm by no means a networking expert but it seems that advertisements are getting through the pihole somehow. In my router DNS settings I have the static pihole IP listed as DNS Server 1. I also have unbound actively running. There is a router setting to "Advertise router's IP in addition to user-specified DNS". My understanding is that I would not want this setting on, thus having only the pihole resolve DNS.
If I turn it off, many services (but not all) stop working. If I try to access a site through the browser I'll get a "DNS Probe Possible" error. When I had the "advertise router's IP..." setting turned off, the Pi-hole admin page showed it was getting more than 10x the queries it usually does.
I turned the "advertise router's IP..." setting back on and ran an nslookup; the Pi-hole is listed in the output as the resolving DNS server. The Pi-hole still blocks more than 15% of queries with the setting turned on, but I seem to be getting ads, so I suspect the router DNS is resolving them.
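For anyone checking the same thing, a quick sketch of how I compare what a client actually uses (run from a Windows client; the last address is a placeholder for the Pi-hole's static IP):

rem List every DNS server the adapter learned from DHCP - with the
rem "advertise router's IP" setting on, the router shows up here too
ipconfig /all | findstr /C:"DNS Servers"
rem Compare the default resolver's answer with the Pi-hole asked directly
nslookup example.com
nslookup example.com 192.168.50.3

If the adapter lists both the Pi-hole and the router, clients are free to fall back to the router and bypass blocking, which would line up with ads getting through.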
Any insight would be greatly appreciated
Edit/update: Thanks to everyone who has helped out so far. The issue seems to be with unbound. By switching the Pi-hole's upstream DNS from unbound's 127.0.0.1#5337 back to a predefined provider, I am able to shut off the router's "Advertise router IP" setting and keep connectivity.
Final Update: The issue here was the resolvconf.conf that Debian Bullseye+ auto installs. The official unbound documentation contained the steps on how to fix this and everything now runs as intended. Thanks to everyone who helped me identify and fix this issue
submitted by Capnlanky to pihole [link] [comments]


2024.04.28 12:14 Safderun67 Cloudflare Local IP Address Queries Fail on Linux Systems Only

Hey everyone. I just deployed a few applications on my Linux server in my home network. The IP address of the server is 192.168.1.200. I have a domain name which I manage from Cloudflare. I created a subdomain for the home server, home.domain.com, and then a few more subdomains for the different applications, like nginxproxymanager.domain.com and openmediavault.domain.com. The mapping is like:
domain.com                     A Record       3.172.180.12 (a public IP not related to my home network, can be ignored)
home.domain.com                A Record       192.168.1.200
openmediavault.domain.com      CNAME Record   home.domain.com
nginxproxymanager.domain.com   CNAME Record   home.domain.com
All the DNS records work as expected when I query them from macOS. The online DNS query tools also show that the domains are pointing to the correct local IP address.
But when I query the local ones from a Linux computer (tested on Arch and Debian), I get errors.
Case 1 (Failing): Arch dig home.domain.com (local) Query
dig home.domain.com
;; communications error to 1.1.1.1#53: timed out
;; communications error to 1.1.1.1#53: timed out
;; communications error to 1.1.1.1#53: timed out
;; communications error to 1.0.0.1#53: timed out

; <<>> DiG 9.18.25 <<>> home.domain.com
;; global options: +cmd
;; no servers could be reached
Case 2 (Failing): Arch nslookup home.domain.com Query:
nslookup home.domain.com
;; communications error to 1.1.1.1#53: timed out
;; communications error to 1.1.1.1#53: timed out
;; communications error to 1.1.1.1#53: timed out
;; communications error to 1.0.0.1#53: timed out
;; no servers could be reached
Case 3 (Success): Arch dig domain.com Query:
dig domain.com

; <<>> DiG 9.18.25 <<>> domain.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 30263
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 0

;; QUESTION SECTION:
;domain.com.            IN      A

;; ANSWER SECTION:
domain.com.     300     IN      A       104.21.92.58
domain.com.     300     IN      A       172.67.187.19

;; Query time: 23 msec
;; SERVER: 1.1.1.1#53(1.1.1.1) (UDP)
;; WHEN: Sun Apr 28 12:54:50 +03 2024
;; MSG SIZE  rcvd: 74
Case 4 (Failing): Debian dig home.domain.com Query:
dig home.domain.com

; <<>> DiG 9.16.48-Debian <<>> home.domain.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 59755
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: d7205f5de8578457 (echoed)

;; QUESTION SECTION:
;home.domain.com.       IN      A

;; Query time: 203 msec
;; SERVER: 192.168.1.1#53(192.168.1.1)
;; WHEN: Sun Apr 28 13:05:51 +03 2024
;; MSG SIZE  rcvd: 59
The Arch and Debian machines are different computers on different hardware. As you can see, I get status: REFUSED when I query from the Debian server. I don't have a general connectivity problem with 1.1.1.1, because I can query other public IP addresses, and when I switch my DNS to another provider like 8.8.8.8 I get the same results. The domain names that point to the local IP addresses work as expected on my macOS system.
Is there a configuration that blocks local IP addresses on Linux environments?
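A minimal sketch of what I can check on the failing machines, using only names and addresses already mentioned above:

# Which resolver path is the system actually using?
cat /etc/resolv.conf
resolvectl status        # only if systemd-resolved is in use
# Does the lookup work when forced straight to Cloudflare over TCP?
dig @1.1.1.1 home.domain.com +tcp
# Does the router's own resolver answer for the name?
dig @192.168.1.1 home.domain.com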
submitted by Safderun67 to homelab [link] [comments]


2024.04.27 21:19 Cuillere_a_pot ERR_CONNECTION_TIMED_OUT on most but not all websites

I am facing weird behavior from my wifi router: everything was working perfectly fine until this morning. Seemingly out of nowhere, most websites started giving me the error "ERR_CONNECTION_TIMED_OUT", in Chrome and Edge alike. The weird part is that some websites still work: so far I've seen that google.com, Bing and YouTube still work fine, but everything else displays this error.
My Phone is on the same wifi network and everything works. I think it is coming from the router because someone else's phone is having the same issue.
The network is laid out like this: FTTH router->switch->wifi router.
The wifi router is a Sagemcom 5370e (Telia-branded, since I'm in Sweden). I tried restarting it, and I tried pinging the websites from cmd: an 8 ms response from YouTube, but "reply from 192.168.0.1: unable to reach the destination network" for twitter.com (for example).
The nslookup command works in both cases; I get an answer from the DNS server both for Twitter and YouTube.
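For completeness, the comparison I can run from cmd between a working and a failing site (the targets are the examples above; -d just skips reverse lookups so each hop shows as a plain IP, and the ping IP is a placeholder for whatever nslookup returned):

rem Trace the path to a working site and to a failing one
tracert -d youtube.com
tracert -d twitter.com
rem Ping the failing site by the IP nslookup returned, to take DNS out of the picture
ping -n 4 <IP returned for twitter.com>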
That's all I can tell; I'm a bit lost right now, and it is absolutely annoying that I'm only able to watch YouTube on my computer and nothing else.
Oh, and I can't even reach the router config page (entering 192.168.0.1 in my browser). But again: everything works perfectly from my phone! I need your help :'(
submitted by Cuillere_a_pot to techsupport [link] [comments]


2024.04.15 22:34 QoreIT Some PCs can't resolve URLs in browsers until they reboot

For context, I own an MSP and, as such, this mystery has rolled up to me, but I'm at a loss.
Our customer's environment:
The problem:
Starting on April 8, our customer reported that their internet was down. It wasn't; their PCs were still online in our RMM and other tools. Our customer found that they could get affected PCs online after rebooting those PCs.
Further investigation revealed that when the problem occurred, only some PCs were affected while others were not.
Troubleshooting:
We power-cycled the network switch, but a few hours later, a few PCs couldn't reach the internet.
I remoted into one of the PCs while it was experiencing the problem and found:
To eliminate a DNS problem on the DC, I configured the affected PC's DNS settings to point to 8.8.8.8 and 8.8.4.4. The problem persisted; the PC's web browsers still couldn't resolve. I tried other browsers and found that they behaved just like the first one - they could not resolve sites that the user had not yet tried to visit. I then returned the local DNS settings to the DHCP-derived values.
After rebooting, the PC's browsers could visit sites that failed before we rebooted. The customer has reported that there are instances when the PCs can't resolve local resources too, but I've been unable to witness or reproduce that behavior.
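For the next occurrence, a rough checklist I plan to capture from an affected PC, to separate the OS resolver from the browser (example.com stands in for whatever site fails):

rem nslookup queries the DNS server directly, bypassing the local cache
nslookup example.com
rem ping resolves through the Windows resolver and its cache
ping -n 2 example.com
rem Look for a stale or negative entry in the OS DNS cache
ipconfig /displaydns | findstr /i example
rem Clear the OS cache and retry in the browser
ipconfig /flushdns

Browsers also keep their own DNS cache and may use DNS-over-HTTPS independently of the adapter settings, which might explain why switching the PC to 8.8.8.8 made no difference.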
None of our clients are experiencing this behavior, so I don't think the behavior is attributable to our stack.
Any clues?
Thanks in advance!!
Qore
submitted by QoreIT to sysadmin [link] [comments]


2024.04.14 17:49 grr79 DNS Help

Recently switched to ATT from Spectrum and took the opportunity to upgrade my original UDM to a UDM SE. All settings copied across no problem, but I have one nagging issue. I used to have certain devices on my network (Apple TVs etc.) using a smart DNS service; something on the UDM SE is blocking this from working. Connected directly to the ATT router in IP Passthrough it works no problem. Setting the DNS IPs in the UDM SE for the whole network works fine, but setting them manually on devices seems to get overridden by the UDM SE. On my Mac, with the DNS set manually, an nslookup seems to show it is trying to use the correct IP on port 53, yet a DNS leak test just shows that my UDM SE settings are taking preference. Anybody have any ideas?
submitted by grr79 to Ubiquiti [link] [comments]


2024.04.04 19:12 FLHRanger Double NAT Solution - ATT BGW210 - TPLink ER605 V2

Hello Friends. I am a novice.
My setup is the following: ATT <- BGW210 <- ER605 <- Deco XE75 <- Deco XE75.
The BGW210 is the RG and the ER605 is my firewall/router. The first Deco provides Wi-Fi on one side of the house, and the second Deco is wired to the first and covers the other side. My spouse's and my work computers are connected to the ER605, and the TV in our bedroom is connected directly to the second Deco. Other than that, everything else is on Wi-Fi. I have a regular combined 5GHz and 2.4GHz SSID and an IoT SSID at 2.4GHz. Smart switches, irrigation and Ring cameras are on IoT. I also have a HomePod and a HomePod mini.
When I originally set this up, I put the BGW210 in IP Passthrough and made sure the WAN IP of the ER605 matched the BGW210's. From forums I've read about this setup, it was recommended that the LAN IP of the ER605 be set to a different subnet than the BGW210's; the BGW210 defaults to 192.168.1.254, so the ER605 would be set to 192.168.2.xxx. This worked well until I started to learn a bit more about the setup.
I recently learned some cmd commands that were interesting, like nslookup, pathping and tracert. So I did a tracert on my laptop connected over Wi-Fi and noticed the first 3 hops were private IPs. Looking into it more, it was pretty obvious: Deco IP, then ER605 IP, then the BGW210 IP. I think I was in a triple NAT situation.
I first realized I had the Deco system in router mode, so I changed the system to AP mode. I did the tracert again but still had two private IP hops. I thought the BGW210 would "pass through" the WAN IP to the ER605 and I would not be in a double NAT. Maybe I wasn't? I have a limited understanding of this, if any at all.
I found a suggestion on the ATT community forums, so I did the same: made the ER605 IP 192.168.1.254 and waited for everything on the network to catch up to the new config. No problems so far, and the tracert only has my 192.168.1.254 IP as the first hop.
However, I notice that after the first hop, the second always times out. This did not happen when I was triple or double NAT'd (if that's even the right phrase). Also, if I run pathping, it always times out on the second hop after my gateway IP, and it shows a 69/100 lost/sent statistic between my laptop and my gateway.
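For anyone wanting to replicate the checks, roughly how the same runs could be done (the flags and the 8.8.8.8 target are just illustrative; they suppress name lookups so each hop shows as a plain IP):

rem Trace the route, one line per hop, raw IPs only
tracert -d 8.8.8.8
rem Per-hop loss statistics, 10 probes per hop, no name resolution
pathping -n -q 10 8.8.8.8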
Was the instruction by ATTHelp correct in the forum? Should you really make the downstream router's IP the same as the RG's? Did I really solve the double NAT issue? Did I even have an issue?
If you've made it to the end of this thank you so much for reading and thanks in advance for your reply. I am finding all of this very interesting and am excited to learn more.
https://preview.redd.it/8b0v0i9ywhsc1.png?width=738&format=png&auto=webp&s=51e8715f942c178cb2b21515dcfb37a0b05ccbc8
submitted by FLHRanger to HomeNetworking [link] [comments]


2024.03.30 02:05 i_cant_take_it_anymo Hoping for some help with what I think is a DNS issue

tldr; - If an Asus home router is set to use Cloudflare DNS and "nslookup example.com 1.1.1.1" from a connected device returns an appropriate response, why would "nslookup example.com" return a "non-existent domain" message?
More Info:
I manage a small network for my HOA as a volunteer. We share a Comcast business line. The line comes into our clubhouse, then into a pfSense box for firewall and DHCP (set to use Cloudflare DNS servers), then into a managed Netgear switch, then out to all the homes via in-ground cat5e. Each home has its own router that the owners deal with. It's just one /22 network; no VLANs, no blocking/filtering/traffic shaping. In the last 10 years the only problems we've had were either Comcast's fault or residents plugging the internet drop into a non-WAN port on their home routers.
This week I heard from several (but a minority of) residents that they're having frequent (but not always) problems accessing specific websites (nytimes.com, facebook.com, and others) as well as multiple streaming sites (Netflix, Paramount, Hulu). Two neighbors said that Netflix occasionally tells them that they're behind a VPN/proxy so they can't stream, but they're definitely not. We're not having any problems at our place, but I've got a static IP to the HOA router.
I got over to a neighbor's house today to do some testing. There were 3 specific sites she couldn't get to (a Chinese government site to apply for visas, msudenver.edu/catalog, library.auraria.edu), though there were probably others. I can reach them all fine from my home. Traceroutes looked fine, but nslookups all returned non-existent domain. I changed her router to use Cloudflare DNS, and that fixed access to the two .edu sites, but the Chinese site was still returning non-existent domain. An nslookup on the site with 1.1.1.1 at the end works, but a plain nslookup doesn't. Why aren't her devices using Cloudflare to resolve the site? I'm certainly no expert, and I'm completely stumped.
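The part I keep coming back to: a plain nslookup asks whatever resolver the device itself learned from DHCP, while adding 1.1.1.1 at the end asks Cloudflare directly. So a hedged next step would be to compare the two and check what her devices actually learned (shell-style sketch; example.com stands in for the failing site):

# Compare the device's default resolver against Cloudflare asked directly
nslookup example.com
nslookup example.com 1.1.1.1
# Then check which resolvers the device actually got from DHCP
# (Windows: ipconfig /all; macOS: scutil --dns | grep nameserver)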
There's also the question of why they only recently started having DNS issues when I haven't touched the pfSense config, but I can let that go for now if setting their home routers to use Cloudflare fixes the problems.
submitted by i_cant_take_it_anymo to HomeNetworking [link] [comments]


2024.03.29 23:35 ComputerEngineer2014 EdgeRouter X DNS Stops Responding

Hi all,
I am running a mixed network with an EdgeRouter X (v2.0.9-hotfix.7) with some UniFi kit for wireless and switching.
At one of the sites, I regularly have DNS issues. I'll be browsing the internet and suddenly pages won't load with a DNS error. Lately I've been checking, and when this happens I can still ping WAN IP addresses, but DNS resolutions (using nslookup) fail. I've also confirmed that if I specify an outside DNS server (8.8.8.8), my computer will resolve the IPs.
Typically after waiting a minute or two, DNS will start resolving and everything will be fine. Today this happened and it is not resolving.
I checked the DNSMASQ service using sudo /etc/init.d/dnsmasq status and it reports that DNSMasq is currently running. I am supposing that if I reboot the router it will come back, but I thought I'd leave it like this temporarily in case someone has a suggestion of what to check while it is actively not working.
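A few things that might be worth capturing while it's in this state, assuming nothing beyond the config shown below (the LAN address is a placeholder for whichever switch0 VLAN interface the client sits on):

# From a LAN client: does dnsmasq on the router answer at all?
dig @<router-LAN-IP> google.com

# On the EdgeRouter itself, in operational mode:
show dns forwarding statistics
show dns forwarding nameservers
# Anything unusual from dnsmasq in the logs?
sudo tail -n 100 /var/log/messages | grep -i dnsmasq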
For what it's worth, I recently switched ISPs, and the EdgeRouter did this on the old ISP as well. Originally, I attributed this to something weird with the ISP (formerly slow Frontier DSL was the best I could get), but I'm digging into this more now that I know it isn't ISP related.
Not sure if this is really relevant, but this system is part of a two-site setup where both locations have EdgeRouter Xes and are linked with a WireGuard VPN. The other (main) site doesn't have these issues. The only thing "nonstandard" to my knowledge is the config entry that forwards DNS queries across the VPN if they are for that location's DNS zone. (m.mydomain.lan is the site with issues, w.mydomain.lan is the "main" site that doesn't have issues.)
DNS config is as follows:
dns {
    dynamic {
        interface eth0 {
            service custom-cloudflare {
                REDACTED
            }
        }
    }
    forwarding {
        cache-size 10000
        listen-on switch0.32
        listen-on switch0.34
        listen-on switch0.35
        listen-on switch0.36
        listen-on wg0
        name-server 8.8.8.8
        name-server 8.8.4.4
        options server=/w.mydomain.lan/192.168.2.1
    }
}
submitted by ComputerEngineer2014 to Ubiquiti [link] [comments]


2024.03.28 05:49 jacintorigal Squarespace Domain with AWS Route 53

I am not very good with DNS/computer networking stuff so please go easy on me...
TL;DR: I believe I set up my Squarespace domain to be hosted on AWS Route 53 correctly but it does not work when I try to access it online (it just keeps trying to load the page but cannot, although my actual AWS website works via the AWS given link)
I have an AWS environment with an application that is my personal website, a very simple Django app. I bought a domain name on Squarespace (I'll refer to it as example.com), and I would like to be able to go to Google, type in example.com and have it load my website. So basically, as I understand it, my domain name is example.com, the domain registrar is Squarespace, and the domain will be hosted on AWS.
As I understand it, you have to do domain hosting on AWS with Route 53. To start, I made a hosted zone on AWS called example.com (the same name as the domain I bought on Squarespace), found the 4 nameservers from that, and added them as custom nameservers in the Squarespace domain panel. No other changes were made (should I add DNS records there, or nameserver registration? The support agent said no, but they didn't seem like an expert).
Next I went to AWS and associated my Elastic IP address with my environment. I then went to hosted zones and added an A record: the record name is example.com (same as the domain I bought), Type: A, Routing policy: simple, Alias: no, and for the Value I put the Elastic IP address that was associated with my environment.
I waited several (4) days but there is still no result, so it is not an issue with DNS propagation or propagation within the AWS domain system. Furthermore, I tried the nslookup command in the terminal on both the domain name example.com and the Elastic Beanstalk URL that I currently use to access the website (I'll refer to it as eb.elasticbeanstalk.com), and they give different results for the non-authoritative answer Address: example.com gives the same IP address as my Elastic IP, but eb.elasticbeanstalk.com gives something else, which I assume is one of the non-elastic IP addresses that AWS says they switch between:
jack@10-17-22-222 ~ % nslookup example.com
Server: 192.168.1.1
Address: 192.168.1.1#53
Non-authoritative answer:
Name: example
Address: [my elastic IP]

jack@10-17-22-222 ~ % nslookup eb.elasticbeanstalk.com
Server: 192.168.1.1
Address: 192.168.1.1#53
Non-authoritative answer:
Name: eb.elasticbeanstalk.com
Address: [not my elastic IP]
Name: eb.elasticbeanstalk.com
Address: [also not my elastic IP but not the same as the one above]
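In case it helps anyone sanity-check the delegation side, a couple of commands I can run (the awsdns hostname is a placeholder for one of the four nameservers in my hosted zone):

# Which nameservers does public DNS currently return for the domain?
dig NS example.com +short
# Does the Route 53 zone itself answer for the A record?
dig @ns-123.awsdns-45.com example.com A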
Anyway, I've been stuck on this problem for literal weeks and am running out of ideas. I read a lot of the documentation, but there's so much AWS-specific jargon that I find I'm misunderstanding a lot of it. If anyone has more experience in this matter and has any suggestions, or if you noticed a mistake in the way I configured things above, could you please let me know what to change?
Thanks in advance 🙏
submitted by jacintorigal to aws [link] [comments]


2024.03.24 19:17 Jsafah WiFi failing tracert and nslookup

I have a UDM Pro SE, a UniFi Pro switch, and a U6 Pro AP. The AP is plugged into the switch and the port is tagged for the "Main-Staff" VLAN. When the port is tagged to this VLAN, I can't get internet; it says DHCP timeout/fail on the UDM side. Tracert and nslookup fail as well.
However, if I put that port onto the default network (UDM's), it'll work.
Any help or suggestions?
submitted by Jsafah to Ubiquiti [link] [comments]


2024.03.19 03:50 MikeyMike_79 Firewalla outage post WAN outage (Gold+)

I know I have seen some posts similar to this issue, but posting about it again as my experience with support has been less than stellar.
High level topology overview:
Xfinity XB8 modem to Firewalla gold+ WAN port Eth4
Firewalla eth1 and 2 in LAG to TPLink TL-SG3210XHP-M2 (Active LACP LAG)
The TP-Link's 10G ports connect to a Cisco 3850 using a 10G DAC, a cheap managed switch on 2.5GbE, and another dumb switch on 10G SFP+. Access points connect to the TP-Link and the other managed switch via 2.5GbE PoE and use tagged VLANs for the different SSIDs, with the exception of the one used for users.
Currently have multiple VLAN networks (vlan numbers are for example and not ones in use):
Users (untagged, LAN on firewalla)
Work(vlan 5), Management(vlan 6), IOT(vlan 7), guest(vlan 8) networks are all tagged vlan's.

Issue:
Over the last couple of weeks (although I had a similar, but not identical, issue at the end of 2023 as well), whenever my WAN link (Xfinity XB8 modem) reboots or goes down long enough for Firewalla to detect an issue, my local network on Firewalla breaks once the internet recovers. In the latest occurrence my Xfinity modem rebooted, and all DNS lookups to the internet failed on every VLAN except the IOT VLAN, which still worked for some reason. Local records added on the Firewalla still worked, although most of them point to devices in my management VLAN, which did not work at all - I could not ping IPs directly or access hosts there. The only things I could confirm working were connectivity to the internet via IP address (pinging 8.8.8.8 and 1.1.1.1, and curling a couple of known IPs that ended up redirecting to a DNS name that then failed), the Work and User networks, and, albeit with slow DNS lookups, the IOT network (the TVs could connect to Disney+ for the kids).
Since the app still worked, I got the SSH password and connected to the Firewalla, and noticed a couple of things. First, the "splash screen" shown on connecting reported /home usage as unknown. Next, the bond0 subinterface for the Management VLAN was missing. nslookups were defaulting to IPv6, which was enabled and failing, but IPv4 lookups worked. In the app I then disabled IPv6 on the WAN, which allowed the default nslookup to use IPv4 and work. At that point the Management VLAN interface showed up on the box again (checked via ip addr and ifconfig), and once the interface returned, connectivity on the LAN to the Management VLAN started working again.
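For the next outage, the sort of state I plan to capture over SSH before anything reboots (the VLAN ID and hostname below are placeholders, since mine aren't the real ones anyway):

# Is the Management VLAN subinterface still present on the bond?
ip addr show
ip -d link show bond0.6
# Is /home readable/full, given the splash screen showed its usage as unknown?
df -h
# Do IPv4 and IPv6 lookups behave differently?
nslookup -type=A example.com
nslookup -type=AAAA example.com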
The next steps I took were to go through and disable every enabled feature on the Firewalla: first monitoring overall, then the family protect/ad protect stuff; I turned off unbound and any networks set for DoH, disabled the VPN server, turned off any VPN client settings, and paused all routes that were set to use a VPN, making sure the kill switch was off as well. After each step I tested connectivity, and nothing resolved the overall connectivity issue to the internet.
My next step was to disconnect and reconnect the WAN link again and see if anything would resolve. This unfortunately led to the box rebooting, as the power connection on the Firewalla is apparently extremely sensitive, and bumping the barrel connector while unplugging and reconnecting the WAN caused the Firewalla to reboot. This is a whole other issue that I reported in the middle of last year; I thought it was a faulty power brick and was told to check my power source multiple times before I gave up on it, as I had also installed it into a 3D-printed rack mount and stopped bumping the wire when doing stuff near where it previously sat on a shelf.
That said, after the reboot everything worked 100% again. I made sure to submit support tickets with box logs during the last 2 outages I had. The latest response I received from support was to try creating a new VLAN during the issue and see if it has a problem too, but that feels very lacking in diagnostic depth from a professional networking company - especially combined with every response to the issue being in the same vein: turn off unbound, disable the LAG, is your switch working? While I understand some of these are valid checks, they are also mostly irrelevant (at least with respect to the LAG), since internet access works via IP and traffic internal to the VLANs all worked, with issues only when going to the Firewalla.
Also, I should note that I left the last outage running for over an hour before the accidental reboot, in an attempt to let a support person remote-connect during the outage, but unfortunately the ticket system was not fast enough. And no, I don't necessarily expect a super fast response for something I don't pay a yearly support fee on, but it's also not viable to leave the outage running indefinitely when both my wife and I work from home.


I am curious if anyone else has had any weird anomalies with Firewalla, or can think of any other things to check the next time an outage happens - which it certainly will, as there is either a hardware issue on my unit or a bug in the code.



submitted by MikeyMike_79 to firewalla [link] [comments]


2024.03.19 03:39 mrpink57 [Bedrock] - Cloudflare subdomain

So I am trying to figure this out.
Trying here: https://mcsrvstat.us I can reach the server just fine but as soon as I try the subdomain on a nintendo switch I just get I cannot connect to this world.
Trying here: https://mcsrvstat.us I can reach the server just fine, but as soon as I try the subdomain on a Nintendo Switch I just get "I cannot connect to this world". The server is a Java Docker server with Geyser and Floodgate. It works with my old DuckDNS domain, but I am trying to move the server to my Cloudflare domain and use a specific subdomain. I set up a Cloudflare DDNS on my Oracle VPS and it shows the correct IP in my Cloudflare dashboard, but still nothing.
Any ideas how to troubleshoot this more? If I do an nslookup it gives the correct IP for the domain.
I also tried an SRV record, but I understand that Bedrock does not use SRV records.
submitted by mrpink57 to admincraft [link] [comments]


2024.03.18 01:28 kooshipuff ONE computer not using the DNS server it gets from DHCP?

I've been bumping into a super weird problem. I think I have everything set up correctly, and local names are being resolved correctly on my (Android) phone and (Mint 20.3) desktop, but not my (Mint 21) laptop. It says the domains don't exist.
It's almost like that Patrick and Squidward meme: Hey computer, nslookup computer-core.local, which yields:
Address: 127.0.0.53#53

** server can't find computer-core.local: SERVFAIL
..Okay.. So! Tell me about your DNS config: resolvectl status, which yields:
Global
  Protocols: -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
  resolv.conf mode: stub
  Fallback DNS Servers: 10.0.0.1

Link 2 (enp9s0)
  Current Scopes: none
  Protocols: -DefaultRoute +LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported

Link 3 (wlp10s0)
  Current Scopes: DNS
  Protocols: +DefaultRoute +LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
  Current DNS Server: 10.0.0.2
  DNS Servers: 10.0.0.2
..Okay.. That looks right. So do that: nslookup computer-core.local 10.0.0.2 ... which works??
I'm at a loss. I'll probably end up adding to the hosts file if I don't find a real solution, but that's so weird, right?
EDIT: Resolved! (Ha!) - I didn't realize .local was a special TLD. I switched it over to .lan in BIND and my kube's ingresses, and all my devices can get to stuff now.
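For anyone else who lands here: the plain nslookup above goes through the 127.0.0.53 stub from systemd-resolved, which handles .local names specially, while putting 10.0.0.2 at the end bypasses the stub entirely - hence the difference. A hedged way to see the stub's own view, assuming systemd-resolved:

# Ask systemd-resolved itself instead of a specific server
resolvectl query computer-core.local
# Show how the Wi-Fi link is configured for mDNS/LLMNR
resolvectl status wlp10s0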
Thanks!
submitted by kooshipuff to linuxquestions [link] [comments]


2024.03.11 23:52 Downtown_Data_6884 SSL certificate monitor detecting wrong certificate

I have set up an SSL certificate monitor to check the date validity of a public website we host. However, the monitor is picking up the status of the wildcard certificate installed on the backend server, rather than the CN-specific certificate presented to clients in the browser at the frontend. I don't know how it is doing this; if I browse the site from the probe device I am presented with the correct certificate in the browser.
I have checked that there are no hosts file entries which might override the DNS settings, and I have confirmed that the probe device is resolving DNS correctly to the frontend (using NSlookup).
The setup is a Windows-based IIS server hosting multiple websites on our domain; each site is a subdomain, so we have historically used a single wildcard certificate to cover all of them. In front of the IIS server we have a cloud security solution which provides WAF services.
Recently we switched to having the WAF provide automated deployment of Let's Encrypt certificates. I wish to monitor the validity dates of these certificates (having had a couple of incidents where they were not renewed in a timely fashion), but PRTG is detecting and reporting on the validity of the server-side wildcard certificate.
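One hedged way to see which certificate is actually served for a given hostname from the probe device, assuming OpenSSL is available there (sub.ourdomain.example is a placeholder for one of the monitored subdomains): servers hosting several sites generally pick the certificate based on the SNI hostname the client sends, so comparing a request with and without it can show where the wildcard is coming from.

# Connect with the hostname sent as SNI and print the certificate's subject and dates
openssl s_client -connect sub.ourdomain.example:443 -servername sub.ourdomain.example </dev/null 2>/dev/null | openssl x509 -noout -subject -dates
# Same endpoint without SNI - many servers fall back to a default (often wildcard) certificate
openssl s_client -connect sub.ourdomain.example:443 </dev/null 2>/dev/null | openssl x509 -noout -subject -dates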

submitted by Downtown_Data_6884 to prtg [link] [comments]


2024.03.10 20:43 SeaLife97 Weird DNS resolution

Lately, I've been experiencing slow DNS resolution, which has been a concern. Today, I attempted to address the issue by using nslookup, but the results were not quite what I anticipated.
> nslookup google.de
Server:  one.one.one.one
Address:  1.1.1.1

Non-authoritative answer:
Name:    google.de.fritz.box
Addresses:  2001:19f0:6c00:1b0e:5400:4ff:fecd:7828
            45.76.93.104

> nslookup reddit.com
Server:  one.one.one.one
Address:  1.1.1.1

Non-authoritative answer:
Name:    reddit.com.fritz.box
Addresses:  2001:19f0:6c00:1b0e:5400:4ff:fecd:7828
            45.76.93.104
Each time I restart Google Chrome or any other application utilizing the internet, the initial DNS resolution can take up to 20 seconds, regardless of the number of tabs open. Once the resolution is completed initially, everything works smoothly until I restart the browser.
Considering that the nslookup response consistently includes the 'fritz.box' suffix for any request I make, I suspect the issue may be related to FritzBox.
I've already switched the DNS settings from "Use DNSv4 servers assigned by the Internet provider (recommended)" to "Use other DNSv4 servers," opting for 8.8.8.8 and 1.1.1.1. Additionally, I've tried locally changing the DNS server to 8.8.4.4 and others, but all servers yield identical results with nslookup.
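Since every answer comes back with the fritz.box suffix tacked on, one comparison worth trying is a fully-qualified lookup (the trailing dot stops the connection-specific suffix from being appended), plus a look at which suffix the adapter has picked up:

rem Fully-qualified query - the trailing dot prevents suffix appending
nslookup google.de.
rem Show the connection-specific DNS suffix the adapter learned
ipconfig /all | findstr /i "suffix"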
I'm running:
Edition       Windows 10 Pro
Version       22H2
Installed on  24.01.2023
OS build      19045.4046
Experience    Windows Feature Experience Pack 1000.19053.1000.0

Wifi Adapter Settings IPv4
submitted by SeaLife97 to fritzbox [link] [comments]


2024.03.02 08:33 excited4m Just started job search in USA - any feedback appreciated

submitted by excited4m to resumes [link] [comments]


2024.02.24 08:00 surjit1996 MacOS internet sharing is very unreliable, any alternative?

So I use a mobile hotspot for the internet connection on my MacBook Pro, and I want to share that connection from the MacBook Pro to my unRAID server via ethernet. I was using macOS's Internet Sharing feature for this, but it rarely works, and when it does, it suddenly stops working without any change: no server restarts, macOS reboots or hotspot/wifi disconnects - it just stops randomly, and the unRAID server has no internet connection. I can dig and nslookup but cannot ping. So far I've tried a bunch of things: turning Wi-Fi off and on, deactivating and reactivating the shared ethernet network, deleting and re-adding the service in the Network settings, deactivating and reactivating Internet Sharing, and restarting my Mac and my unRAID server. During all this, sometimes I can open the unRAID web UI in the browser but there is still no internet on the server; other times I cannot open the web UI or ping the server (I get host down), yet the NAS still works - Finder is able to connect and mount the shared volume via SMB. And sometimes even SMB doesn't work.
Please suggest if I'm doing anything wrong or missing something, or if there is an alternative third-party tool to achieve this, such as a virtual switch or router for macOS.
Thank you.
submitted by surjit1996 to MacOS [link] [comments]


2024.02.23 19:11 Material-Grade-491 How to resolve cache issue when using CloudFront

Hello,
I have an application running in an AWS BeanStalk environment, and now I have built a simple maintenance page using CloudFront with the static HTML files hosted in an S3 bucket. Because I didn't want to make the S3 bucket public, I used CloudFront to serve the HTML content.
I then created IP-based routing in Route 53; by default, both the internal-IP record and the default record point to the Beanstalk endpoint, so both internal and external people can access the application.
For testing, I have updated Route53 for "internal" to the CloudFront endpoint, and with a page refresh, I see I have routed to the Maintenance page, which is as expected.
I then updated the Route 53 value for internal back to the BeanStalk environment and performed a page refresh, but I continued seeing the Maintenance page instead of the actual application. The test case failed.
Debug done: If I close the browser and start a new session, I can see the actual BeanStalk application. But my requirement is the user should not have to do anything.
I also debugged using nslookup, which returns the Beanstalk value, but the browser content is still coming from CloudFront.
I also added the to the maintenance HTML page so the user will automatically get the actual page when the maintenance is completed.
Overall, when switching from CloudFront to the actual application, the browser is showing the content from CloudFront even though the nslookup is resulting in the actual application.
Would you have any thoughts on addressing this?
Thanks.
submitted by Material-Grade-491 to aws [link] [comments]


http://swiebodzin.info