Firewall prox

Currently a Jr. Sysadmin/ General IT Hybrid. Would Like to Get Into Logistics/ Business Analyst

2024.05.06 21:01 suffuffaffiss Currently a Jr. Sysadmin/ General IT Hybrid. Would Like to Get Into Logistics/ Business Analyst

submitted by suffuffaffiss to resumes [link] [comments]


2024.05.04 04:31 According_Race7236 Which way to go...

Hi Guys,
Been lurking around reddit for a while and I want to jump into a more secure/advanced home network but don't know which road to go down.
Currently I'm just running the ISP-provided router in the house, which is pretty basic. I want to step it up, as I have built out a homelab to play with Proxmox, Docker, etc., and to build out an automated media server. I am also installing ESXi onto another mini PC to play around with for work and to tinker with vCenter 8, as I'm not far off sitting the exam, and most of my experience so far has been upgrading/migrating clients from 6.7 to 7.1 etc.
End goal is ultimately to build out a good network and learn some more advanced skills along the way that will help me understand the networking side of things. I understand the best way to stage the network is to set up 3 VLANs for Guest/IoT/Trusted devices. Currently I'm leaning towards buying another mini PC such as an HP T703, dropping in a network card, and installing pfSense/OPNsense, as I could tinker with that for a while before cutting it over - assuming I could go from the ISP router into the WAN port, set up some of the devices only I use, and really play with some firewall settings to see what I can break and fix. I would also then need to budget for a better AP. I understand I could run pfSense as a VM on the homelab, however I don't have room for a network card, with a GPU for transcoding already in place.
Otherwise I was thinking about the Omada wireless router, as that can achieve most of this, but there's less to learn.
Been going back and forth for a few days.
submitted by According_Race7236 to HomeNetworking [link] [comments]


2024.05.02 17:25 OrganicCPU (IT) Looking for recommendations and tips

Hello all, I have recently updated my resume and am looking for any tips or advice on what may need to be reworded or added.
I am currently looking to further my career out of helpdesk and specialize in networking. I understand that as I apply to jobs I will need to tailor my resume to them specifically. But how is this as a starting template?
TIA
submitted by OrganicCPU to resumes [link] [comments]


2024.04.29 23:49 peterkrip Java controller shows APs as offline

I currently don't own a Cloud Key controller and I don't run a UniFi controller continuously in any other manner. I originally just ran the UniFi Java app (ver 7.0.25) on my Mac laptop. Then, I think, at some point I ran the app in a container on Proxmox, but I subsequently decommissioned my Proxmox machine.
Right now, I can run my Java macOS app. I can log in to its web site via my SSO credentials and I see my 2 access points (U6-Lite & U6-Pro), but their status shows offline, although both of them work fine.
In the "Network Device SSH Authentication" I see that SSH is enabled and I see the credentials for logging into the APs via SSH. I am able to do that. I can run "info" and "set-inform http://X.X.X.X:8080/inform" but this doesn't help.
I checked: firewall is off on the laptop.
Trying to curl from within the AP is flaky. Sometimes the connection gets reset; sometimes the response is empty:
U6-Lite-BZ.6.0.15# curl -v http://192.168.192.31:8080/inform
> GET /inform HTTP/1.1
> Host: 192.168.192.31:8080
> User-Agent: curl/7.71.1
> Accept: */*
>
* Recv failure: Connection reset by peer
curl: (56) Recv failure: Connection reset by peer
U6-Lite-BZ.6.0.15# curl -v http://192.168.192.31:8080/inform
> GET /inform HTTP/1.1
> Host: 192.168.192.31:8080
> User-Agent: curl/7.71.1
> Accept: */*
>
* Empty reply from server
curl: (52) Empty reply from server
Clearly the HTTP service is responding sometimes.
Any ideas what's going on?
I'm generally happy to dive into such technicalities but this is frustrating.
Am I better off running the UniFi controller on a small Raspberry Pi box? If I decided to do that, would I need to go through some sort of migration process for the controller?
If I decide to use a Raspberry Pi, how much storage would be needed for logs and such? Right now, I happen to have an old Raspberry Pi 3 box and two MicroSD cards lying around: an 8GB and a 32GB.
Or should I buy a used Cloud Key off eBay?
Or should I be thinking of buying more hardware from ui.com? I'm currently running pfSense and a Cisco switch, since 3-4 years ago people were happy with Ubiquiti APs but not with their edge routers/firewalls.
submitted by peterkrip to Ubiquiti [link] [comments]


2024.04.16 00:13 shinju New to ProxMox - Issues DL380 G9 Random Internal-Error crashes

Installed Proxmox on a new server, and any VM I migrate or try to set up from scratch crashes with:

Apr 15 17:03:15 saturn pveproxy[1827]: worker 14931 finished
Apr 15 17:03:15 saturn pveproxy[1827]: starting 1 worker(s)
Apr 15 17:03:15 saturn pveproxy[1827]: worker 19694 started
Apr 15 17:03:19 saturn pveproxy[19693]: got inotify poll request in wrong process - disabling inotify
Apr 15 17:04:44 saturn pveproxy[1827]: worker 16246 finished
Apr 15 17:04:44 saturn pveproxy[1827]: starting 1 worker(s)
Apr 15 17:04:44 saturn pveproxy[1827]: worker 19947 started
Apr 15 17:04:45 saturn pveproxy[19946]: worker exit
Apr 15 17:04:46 saturn QEMU[18636]: KVM: entry failed, hardware error 0x0
Apr 15 17:04:46 saturn kernel: kvm_intel: set kvm_intel.dump_invalid_vmcs=1 to dump internal KVM state.
Apr 15 17:04:46 saturn QEMU[18636]: RAX=ffffffffa21cbc60 RBX=ffffffffa321b440 RCX=0000000000000001 RDX=0000000000030ecb
Apr 15 17:04:46 saturn QEMU[18636]: RSI=0000000000000083 RDI=0000000000030ecc RBP=ffffffffa3203e10 RSP=ffffffffa3203e08
Apr 15 17:04:46 saturn QEMU[18636]: R8 =ffff94523bc22d20 R9 =0000000000000000 R10=0000000000000001 R11=0000000000000000
Apr 15 17:04:46 saturn QEMU[18636]: R12=0000000000000000 R13=0000000000000000 R14=0000000000000000 R15=0000000000000000
Apr 15 17:04:46 saturn QEMU[18636]: RIP=ffffffffa21cbd6b RFL=00000206 [-----P-] CPL=0 II=0 A20=1 SMM=0 HLT=0
Apr 15 17:04:46 saturn QEMU[18636]: ES =0000 0000000000000000 ffffffff 00c00000
Apr 15 17:04:46 saturn QEMU[18636]: CS =0010 0000000000000000 ffffffff 00a09b00 DPL=0 CS64 [-RA]
Apr 15 17:04:46 saturn QEMU[18636]: SS =0018 0000000000000000 ffffffff 00c09300 DPL=0 DS [-WA]
Apr 15 17:04:46 saturn QEMU[18636]: DS =0000 0000000000000000 ffffffff 00c00000
Apr 15 17:04:46 saturn QEMU[18636]: FS =0000 0000000000000000 ffffffff 00c00000
Apr 15 17:04:46 saturn QEMU[18636]: GS =0000 ffff94523bc00000 ffffffff 00c00000
Apr 15 17:04:46 saturn QEMU[18636]: LDT=0000 0000000000000000 ffffffff 00c00000
Apr 15 17:04:46 saturn QEMU[18636]: TR =0040 fffffe0800e59000 00004087 00008b00 DPL=0 TSS64-busy
Apr 15 17:04:46 saturn QEMU[18636]: GDT= fffffe0800e57000 0000007f
Apr 15 17:04:46 saturn QEMU[18636]: IDT= fffffe0000000000 00000fff
Apr 15 17:04:46 saturn QEMU[18636]: CR0=80050033 CR2=0000559fd2ba7fb8 CR3=000000011979c000 CR4=000006f0
Apr 15 17:04:46 saturn QEMU[18636]: DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000
Apr 15 17:04:46 saturn QEMU[18636]: DR6=00000000fffe0ff0 DR7=0000000000000400
Apr 15 17:04:46 saturn QEMU[18636]: EFER=0000000000000d01
Apr 15 17:04:46 saturn QEMU[18636]: Code=a3 e8 43 8e 8c ff eb ca cc eb 07 0f 00 2d d9 23 44 00 fb f4 cc cc cc cc eb 07 0f 00 2d c9 23 44 00 f4 c3 cc cc cc cc cc 0f 1f 44 00 00 55 48 89 e5
Apr 15 17:05:28 saturn pveproxy[1827]: worker 15373 finished
Apr 15 17:05:28 saturn pveproxy[1827]: starting 1 worker(s)
Apr 15 17:05:28 saturn pveproxy[1827]: worker 20064 started
Apr 15 17:05:29 saturn pvedaemon[1820]: root@pam end task UPID:saturn:00004A42:00077BB1:661DA3AE:vncproxy:112:root@pam: OK
Apr 15 17:05:29 saturn pveproxy[19693]: worker exit
Apr 15 17:05:30 saturn pveproxy[20063]: worker exit


I've run tests on the hardware, I've disabled IOMMU, I've beaten my head against a wall just in case it would help (it didn't), and I can't figure this out. Sometimes a VM will run for a few hours, sometimes it's just minutes. I can't find any rhyme or reason to it. The above log was from a quick VM creation:
agent: 1
boot: order=scsi0;ide2;net0
cores: 4
cpu: x86-64-v2-AES,flags=+aes
ide2: ProxMox2:iso/ubuntu-22.04.4-live-server-amd64.iso,media=cdrom,size=2055086K
memory: 4096
meta: creation-qemu=8.1.5,ctime=1713218436
name: europa
net0: virtio=BC:24:11:98:0F:61,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: ProxMox1:112/vm-112-disk-0.qcow2,iothread=1,size=32G
scsihw: virtio-scsi-single
smbios1: uuid=2284469d-1d41-41d3-baac-2fdcdd24f4ca
sockets: 2
vmgenid: 8b4eb69c-c275-4219-b230-f372c81615ac

The host is running:
CPU(s): 72 x Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz (2 Sockets)
Kernel Version: Linux 6.8.4-2-pve (2024-04-10T17:36Z)
Boot Mode: EFI
Manager Version: pve-manager/8.1.10/4b06efb5db453f29

Any help is appreciated.
submitted by shinju to Proxmox [link] [comments]


2024.04.12 11:40 sysbitnet Can pfsense detect requests and routing to set hostname

Can pfsense detect requests and routing to set hostname
Hello,
In the picture, I have one lab test setup where, as you can see, I have one Proxmox host.
Inside it are 3 virtual machines; one of the three is pfSense, and the other two are independent web servers with different content that need the same port configuration.
How can I easily manage the IPs inside Firewall / Aliases / IP,
and manage the ports inside Firewall / Aliases / Ports?
And how can I set up routing inside Firewall / NAT / Port Forward to do this?
My idea is that when I enter web1.domain.com in the browser I get content from VM 1, and when I enter web2.domain.com I get content from VM 2.
The point is to avoid URLs like web1.domain.com:CustomPort or web2.domain.com:CustomPort - how can I get the content without custom ports?
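A plain port forward can't tell the two hostnames apart, because NAT operates on IPs and ports, not hostnames. The usual approach is a reverse proxy that routes on the HTTP Host header - the HAProxy package available on pfSense can do this. A minimal sketch, where the backend IPs are hypothetical placeholders for the two web server VMs:

```
frontend http_in
    bind *:80
    mode http
    # Route on the Host header sent by the browser
    acl host_web1 hdr(host) -i web1.domain.com
    acl host_web2 hdr(host) -i web2.domain.com
    use_backend web1_vm if host_web1
    use_backend web2_vm if host_web2

backend web1_vm
    mode http
    server vm1 192.168.1.11:80   # hypothetical address of VM 1

backend web2_vm
    mode http
    server vm2 192.168.1.12:80   # hypothetical address of VM 2
```

With something like this in place, both names point at the same pfSense address on port 80 and the proxy picks the backend, so no custom ports appear in the URL.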
Did anyone have experience with this case?
Many thanks in advance :)
submitted by sysbitnet to PFSENSE [link] [comments]


2024.04.11 22:13 04_996_C2 Pulling my hair out: VLAN Traffic over Bridge attached to a Bond

Here's my ProxMox Network Setup:
NIC1 -> vmbr0 -> IP: 192.168.128.5/24, GW: 192.168.128.1 (for management), not VLAN aware
Bond0 (NIC2 + NIC3) -> vmbr1 -> VLAN aware
Search Domain: foo.bar
DNS Server 1: 192.168.128.1
Containers 102 - 111 have the following general Network Configs
Name: eth0
Bridge: vmbr1
Firewall: Yes
VLAN Tag: 8
MAC Address: Auto
IP Address: 192.168.8.(2-11)/28
GW: 192.168.8.1
All containers have the following DNS settings:
DNS domain: use host settings
DNS server: use host settings
The Proxmox server has two switches between itself and the firewall/router. Each port the PVE traffic traverses is set to a trunk or access config that passes packets tagged with VLAN 8.
The problem is this: despite having the exact same network configs (except IP and MAC addresses), some containers have connectivity and some don't.
Any thoughts on what could be occurring?
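When identically configured containers behave differently on a VLAN-aware bridge, one common culprit is the bridge's permitted VLAN range (bridge-vids) not covering the tag, or one of the two switch ports not trunking VLAN 8. For comparison, here is a sketch of what a VLAN-aware bond + bridge stanza conventionally looks like in a Proxmox /etc/network/interfaces (the physical interface names are assumptions):

```
auto bond0
iface bond0 inet manual
    bond-slaves eno2 eno3
    bond-miimon 100
    bond-mode 802.3ad

auto vmbr1
iface vmbr1 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094   # must include VLAN 8, or tagged traffic is dropped
```

It's also worth checking that the bond mode matches the switch side (802.3ad requires an LACP LAG on the switch); a mismatched bond can hash some MAC addresses out one NIC and some out the other, which would explain only some containers losing connectivity.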
submitted by 04_996_C2 to Proxmox [link] [comments]


2024.04.05 01:06 GCUArmchairTraveller Zigbee, Zwave, Wifi, Bluetooth, 433Mhz/915Mhz sensors attached to HA. Plus Coral

Zigbee, Zwave, Wifi, Bluetooth, 433Mhz/915Mhz sensors attached to HA. Plus Coral
Basically the subject line. After having a Dell 3020 with an i5 CPU and 16GB of RAM, bought on eBay for $45, sitting on a shelf for >6 months, this weekend I decided to put it into action.
I also had two 250GB OCZ SLC SATA drives that I got >6 years ago and had basically been using as USB drives. I put one of them inside, burned Proxmox 8.1 onto a USB stick, booted, installed onto the drive, and followed this guide to install HAOS. Everything went fine. HAOS was installed with 2 cores and 4GB (the host has 4 cores and 16GB) and was/is booting with lightning speed.
Then things went one by one. The system has a single M.2 slot, which was taken by an Intel WiFi+Bluetooth adapter. I never used WiFi during this setup, only a wired connection. I did pass-through of the Bluetooth adapter - HAOS detected it as Bluetooth 4.0 without a problem and then detected two ThermoPro TP357 sensors in two rooms.
Then I attached a Sonoff Zigbee dongle via a USB extension cable, installed the Mosquitto broker and Z2M, and started adding Zigbee devices: 5 leak detectors across bathrooms and utility rooms, two LYWSD03MMC sensors converted from Bluetooth to Zigbee that went inside the fridge and freezer, two Tuya wall switches with energy monitoring, several Tuya temp/humidity sensors running from 2xAAA batteries, a motion sensor next to the entrance, and finally two Keen Home smart vents (they were already here when we bought the house). All went fine.
Next, I installed HACS, then downloaded Sonoff LAN and installed 4 S31 switches with energy monitoring. (My intention is to have everything local-only, without the cloud, but my understanding is that replacing the firmware on the S31s requires disassembling and soldering them - I would rather avoid that. If there is a way to replace the firmware without soldering, please share how.)
Next step was Z-Wave - a Schlage BE469 door lock (again, the house came with it). I ordered a Zooz 800 series stick, which arrived quickly. This was the most challenging part so far - I spent two days trying to enroll the lock without success. Fortunately, one kind soul on the HA forum walked me through exactly how it needed to be done and I managed to complete this task too.
Then, the next task - RTL-SDR and rtl_433 to get data from several ThermoPro TX2 sensors and an Ambient WS-2902A weather station (again, the house came with it). I first tried the Ambient app so it could send data via MQTT, but while the AmbientWeather2MQTT installation was successful, it did not get any data into HA, so I defaulted to getting data directly from the station rather than from the console.
Enabled Nest integration for Nest E (should I mention that the house came with it?).
Finally, I attached a Coral to a USB 3 port, but I have not configured it yet - I don't know what is best: have Frigate or something else running outside of HA as a separate VM/container, or have it inside HA. Your opinions are welcome on this matter.
A couple of things remain - Telegram integration for notifications, and hooking up a couple of Wyze cameras after replacing the factory firmware with third-party firmware.
CPU utilization hits 50-60% of 2 cores when booting/shutting down and is around 3% when idle. RAM utilization (of 4GB) does not exceed 40%. Power consumption of the box is about 15-20W.
Screenshot of the config and the box:
https://preview.redd.it/w6upqy1dnjsc1.png?width=1060&format=png&auto=webp&s=aa91af8716b815be2a9aaec201b86499f4f7616c


https://preview.redd.it/7mwzwqiknjsc1.png?width=903&format=png&auto=webp&s=200f4ecc7c73f92404e3080afa63eed59553f19b
submitted by GCUArmchairTraveller to homeassistant [link] [comments]


2024.04.04 18:12 nodesprovider How to Run A Monero XMR Full Node? A Complete and Easy Guide

How to Run A Monero XMR Full Node? A Complete and Easy Guide
https://preview.redd.it/oaug442tlhsc1.png?width=2048&format=png&auto=webp&s=799f2ca3f19b74ce56dc054467052a3c8be382f5

The Role of Nodes In Monero Network

A Monero (XMR) node is a peer-to-peer (P2P) program that stays synchronized with the Monero network at large.
Nodes are the backbone of the Monero network. You can think of a Monero node as a device on the internet that runs the Monero software. This features a full copy of the Monero blockchain and is actively assisting the overall Monero Network.
Nodes are intended to participate within the Monero network and to secure the transactions through the procedure of enforcing the rules within the network.
They can download the entire blockchain to know what transactions have taken place. Furthermore, they can even contribute to the network by participating in the procedure of creating blocks, otherwise known as mining.
This means that through the utilization of a Monero node, you can get access to block height, transaction status, wallet balances, and other comprehensive and sophisticated data.

Monero Full Node vs Remote Node

A node that is not running on a local machine is known as a remote node, and remote nodes can be private if they are only available for personal use or open if they are accessible by other people. A remote node runs on a distinct machine different from the one where the Monero wallet is located.
So the main differences between Monero's full and remote nodes can be summarized as:
  • A Monero full node stores the entire blockchain data and verifies all transactions independently. Running a full node provides you with the highest level of privacy and security when transacting in XMR.
  • A Monero remote node, on the other hand, allows users to interact with the Monero network without downloading and storing the full copy of blockchain data on their devices.
Additionally, it is useful to realize the key differences between public and private Monero remote nodes to have a better understanding of how the Monero network works.
  • Monero private remote nodes: A private remote node operates on a separate machine, such as a VPS or dedicated server, rather than your own computer. Despite its remote location, you maintain complete authority over its functions. This setup gives you the flexibility to transform it into an open node if you choose, allowing others to access it.
  • Monero open remote nodes: Open remote nodes are a perfect solution for people who, for various reasons such as hardware limitations, lack of disk space, or technical knowledge, decide to stay away from deploying their own node. Instead, they rely on these publicly accessible nodes within the Monero network.
Open remote nodes can be provided by individuals sharing their node with the community or by node provider companies like NOWNodes. Open nodes are great because, as stated before, they allow people who are not running their own node to have instant access to the Monero network. However, be aware that using free public remote nodes can be risky!

How to Connect to the Monero Remote Node with NOWNodes

Joining the NOWNodes party is a great opportunity to join XMR blockchain or to lighten your crypto infrastructure. 😎
NOWNodes is a trustworthy blockchain-as-a-service solution that lets users get access to full Nodes and blockbook Explorers via API. Thousands of developers have already enhanced their crypto projects with NOWNodes, now it’s your turn!
We have Service Quality Standards available for all partners. By utilizing NOWNodes for your Monero development needs, you can be assured that your nodes are under 24/7 surveillance and are constantly updated with every major upgrade coming to the blockchain networks. That way, our clients enjoy scalability for large tasks and blazing-fast API responses measured in tiny fractions of a second.
Your connection to the Monero Remote Node was made as easy and secure as possible by NOWNodes. Here are the simple steps on how you can access it:
  • Visit the NOWNodes website (nownodes.io) and sign up using only your email address and a password.
  • Choose a tariff plan that fits your web3 development needs. There are various tariff plan options including a START FREE plan.
    • If you’re willing to start with a free plan, make sure to add the XMR to the list of 5 blockchain networks available on this plan.
    • If you are ready to go on with the PRO plan or higher, you can access any node out of 100 nodes that NOWNodes has to offer, including some additional features such as WSS, Tendermint, and Webhooks.
  • On the “DASHBOARD” page find the “ADD API KEY” button and generate your API key.
  • Finally, when the registration process is complete, it’s time to make some requests!
  • Use the provided Monero endpoint xmr.nownodes.io and your API key to interact with the blockchain.
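As a quick smoke test, monerod's standard JSON-RPC interface exposes a get_info method that reports chain height and network type. A sketch of such a request - the api-key header name and key value here are assumptions, so check the NOWNodes docs for the exact authentication scheme:

```shell
# Hypothetical key -- substitute the one generated on your DASHBOARD page
API_KEY="YOUR_API_KEY"

# Standard monerod JSON-RPC request body; get_info reports height, nettype, etc.
PAYLOAD='{"jsonrpc":"2.0","id":"0","method":"get_info"}'

# The header name is an assumption -- verify it against the NOWNodes docs
curl -s -X POST "https://xmr.nownodes.io/json_rpc" \
  -H "Content-Type: application/json" \
  -H "api-key: ${API_KEY}" \
  -d "${PAYLOAD}" || echo "request failed (check key / connectivity)"
```

A successful call returns a JSON object whose result field includes the current block height, which is an easy way to confirm both the key and the endpoint work.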

How to Set Up a Monero Full Node

If you’re someone who always chooses the hardest path, here’s the walkthrough for you. The step-by-step guide on how to run a full node is presented below. So, let’s start by listing the requirements.

Hardware Requirements:

  • Decent amount of storage: At least 100 GB+ of free disk space
    • Better to have more as it will grow over time (might even take TBs)
  • Memory (RAM): At least 4 GB of RAM is recommended.
  • A stable internet connection with at least 100Mbps bandwidth.
    • Monthly bandwidth use can vary from about one hundred gigabytes per month to several terabytes.
  • Processor (CPU): A modern multi-core CPU (e.g., Intel i3/i5/i7/i9, AMD Ryzen)

Software Requirements

  • Operating System: Windows, Mac, and Linux.
    • Linux Ubuntu is often preferred for its stability and performance, but choose an OS you’re comfortable managing.
  • The Monero daemon (monerod) is essential for running a full node.
  • The repository from Monero's GitHub
  • Monero node binaries, to interact with the node using CLI commands
Once you’ve met the requirements, you’re ready to run a Monero full node. Further in this guide we’ll explain how to do it on Linux. If you’re seeking additional or alternative information, there are many user guides on the official Monero website that you can explore.

Step 1. Configure your ports and firewall

There are two initial ports for the Monero node to connect to: 18080 and 18089.
The difference between them and the use cases for specific needs are described in the image below.
Make sure that port 18080 is open as monerod uses this port to communicate with other nodes on the Monero network.
# Deny incoming and allow outgoing traffic by default
sudo ufw default deny incoming
sudo ufw default allow outgoing
# Allow ssh access
sudo ufw allow ssh
# Allow monerod p2p port
sudo ufw allow 18080
# Allow monerod restricted RPC port
sudo ufw allow 18089
# Enable firewall
sudo ufw enable
# Verify status
sudo ufw status numbered
This gives you a full public node, with port 18089 as a restricted RPC port.
Then you have to set up the service accounts:
# creates system user account for monero service
sudo adduser --system --no-create-home --user-group monero
This command creates a system user named monero without a home directory (--no-create-home). Running the Monero node under a dedicated system user rather than a regular user or root enhances security.

Step 2. Create Directories

Then, you need to create some folders the service needs and set their ownership:
# logfile goes here
sudo mkdir /var/log/monero
# blockchain database goes here
sudo mkdir /var/lib/monero
# create file for config
sudo touch /var/lib/monero/monerod.conf
# set permissions to service account
sudo chown -R monero:monero /var/lib/monero
sudo chown -R monero:monero /var/log/monero
These commands create dedicated directories for Monero’s logs (/var/log/monero) and blockchain data (/var/lib/monero). Organizing files in this manner helps in managing and securing data. Using standard directories like /var/log for logs and /var/lib for application data is a common Linux convention, enhancing clarity and consistency across systems.

Step 3: Download the Latest Monero Binaries

To do this, type in the following command:
cd $HOME
wget --content-disposition https://downloads.getmonero.org/cli/linux64
After that, you will need to verify the download hash signature.
#download latest hashes.txt file
wget https://www.getmonero.org/downloads/hashes.txt
#search hashes.txt file for the computed sha256sum
grep -e $(sha256sum monero-linux-x64-*.tar.bz2) hashes.txt
This command fetches the official list of cryptographic hashes from the Monero website. These hashes are used to verify the integrity and authenticity of the downloaded Monero software. If a match is printed, the file is valid.
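The same check can be exercised end-to-end with a throwaway file, which makes the mechanics clear without downloading anything (the filenames here are made up for the demo):

```shell
# Make a dummy "release" file and a hashes list formatted the way
# getmonero.org publishes it: "<sha256sum>  <filename>" per line
printf 'dummy release tarball\n' > demo-release.tar.bz2
sha256sum demo-release.tar.bz2 > demo-hashes.txt

# The verification step: grep for the locally computed hash line.
# A printed match means the file is intact; no match means it was altered.
grep -e "$(sha256sum demo-release.tar.bz2)" demo-hashes.txt
```

If the file is modified by even one byte, the computed hash changes and grep finds nothing, which is exactly how a corrupted or tampered download would show up.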
Then, you need to extract the tar and copy it to /usr/local/bin:
tar -xvf monero-linux-x64-*.tar.bz2
sudo mv monero-x86_64-linux-gnu-*/* /usr/local/bin
sudo chown -R monero:monero /usr/local/bin/monero*
The final substep here is to clean up the files. To do so, enter the following command:
rm monero-linux-x64-*.tar.bz2
rm hashes.txt
rm -rf monero-x86_64-linux-gnu-*/

Step 4: Configure the Monero Node with a Configuration File

The next crucial step is to add a configuration file. Using this specific configuration for running a Monero full node on Linux sets various operational parameters that affect how your node interacts with the Monero network, handles data, and manages resources. Here’s an example of what you should add to the configuration file:
#blockchain data / log locations
data-dir=/var/lib/monero
log-file=/var/log/monero/monero.log
#log options
log-level=0
max-log-file-size=0 # Prevent monerod from managing the log files; we want logrotate to take care of that
# P2P full node
p2p-bind-ip=0.0.0.0 # Bind to all interfaces (the default)
p2p-bind-port=18080 # Bind to default port
public-node=false # Advertises the RPC-restricted port over p2p peer lists
# rpc settings
rpc-restricted-bind-ip=0.0.0.0
rpc-restricted-bind-port=18089
# i2p settings
tx-proxy=i2p,127.0.0.1:8060
# node settings
prune-blockchain=true
db-sync-mode=safe # Slow but reliable db writes
enforce-dns-checkpointing=true
enable-dns-blocklist=true # Block known-malicious nodes
no-igd=true # Disable UPnP port mapping
no-zmq=true # ZMQ configuration
# bandwidth settings
out-peers=32 # This will enable much faster sync and tx awareness; the default 8 is suboptimal nowadays
in-peers=32 # The default is unlimited; we prefer to put a cap on this
limit-rate-up=1048576 # 1048576 kB/s == 1GB/s; a raise from default 2048 kB/s; contribute more to p2p network
limit-rate-down=1048576 # 1048576 kB/s == 1GB/s; a raise from default 8192 kB/s; allow for faster initial sync
By using this configuration you allow your Monero node to accept connections on all network interfaces and enable blockchain pruning (if you want your node to be archival, just use prune-blockchain=false or simply omit that line), as well as set up the needed settings such as bandwidth limits, RPC connection settings, the i2p proxy, and much more.
Once you’ve set up your config file, you need to create a monerod.service systemd unit file. So, the next substep is creating a systemd service file for running the Monero daemon (monerod) as a system service on Linux. Here’s how you can create a monerod.service file and set it up correctly:
cat > $HOME/monerod.service << EOF
[Unit]
Description=monerod
After=network.target

[Service]
Type=forking
PIDFile=/var/lib/monero/monerod.pid
ExecStart=/usr/local/bin/monerod --config-file /var/lib/monero/monerod.conf --detach --pidfile /var/lib/monero/monerod.pid
User=monero
Group=monero
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
After creating this file, you would typically move it to /etc/systemd/system/ to make it available as a system service. Then, you can use systemd commands to start, stop, enable (start on boot), or disable the service, like so:
  • Starting a Service: sudo systemctl start monerod.service
  • Stopping a Service: sudo systemctl stop monerod.service
  • Enabling a Service: sudo systemctl enable monerod.service
  • Disabling a Service: sudo systemctl disable monerod.service
  • Checking Service Status: sudo systemctl status monerod.service
  • Restarting a Service: sudo systemctl restart monerod.service

Conclusion

Hopefully, you now have a better understanding of how to manually run a Monero (XMR) node, as well as of the various advantages you get by doing so. If the process seems a bit intimidating at first, remember that you can always connect to a pre-configured node with NOWNodes. All of the nodes are maintained 24/7, have an uptime of 99.95%, and you can connect to them in less than a second.
Through a Monero node, you can get access to all of the advantages it offers: you can help secure the network, gain a high level of privacy, and be a part of the overall ecosystem.
We hope that this guide was helpful 🙏
submitted by nodesprovider to Monero [link] [comments]


2024.04.01 00:56 Br-eezy [USA-NH] [H] - CWWK X86 P5 Fanless Mini PC 16GB DDR5 5600MT/s Intel N100 [W] PayPal, Cash

Fantastic mini machine that is currently running Proxmox with OPNsense and a Debian VM. Only selling because I'd like more NICs. Crucial is used for both the RAM and the NVMe drive.
Additionally, there are 2 SATA connections to the main board for SSD storage and a 3rd party fan for additional cooling (although it's not necessary - this thing runs COOL).
Price is $250 OBO + Shipping, or local to 03801, NH. Photo / Timestamp: https://imgur.com/gallery/yofudtQ
CHIP: Intel 12th Gen N100 - 4 Core RAM: Crucial RAM 16GB DDR5 5600MT/s (CT16G56C46S5) NVME: Crucial P3 500GB PCIe (CT500P3SSD8)
Similar item: https://www.amazon.com/CWWK-Fanless-PC-Firewall-3-40GHz/dp/B0CHYG3H6F/
submitted by Br-eezy to hardwareswap [link] [comments]


2024.03.25 09:46 BirdMan916 Starting to Build my First HomeLab Environment

Starting to Build my First HomeLab Environment

https://preview.redd.it/523mm93gzfqc1.jpg?width=5712&format=pjpg&auto=webp&s=7ac6537db22794278659e36b1fc2a79298b15b94
After a few years of working in IT as a support analyst, I've finally started to build out my first homelab environment, mostly thanks to some decommissioned tech from work. I'm using it as a chance to self host some handy services & to do some home learning to help with my long term goal of becoming a systems engineer someday.
Here's what I'm running right now:
Primary box - Fujitsu TX1320 server, Xeon(R) CPU E3-1230, with 8TB of storage. Running ESXi, this hosts most of my services via an Ubuntu server VM and my personal domain to run as a test environment.
VSVR1 - WinServer 2022 - Domain controller
VSVR2 - WinServer 2022 - MDT build server
VSVR3 - Ubuntu server - Docker host running Twingate, Homebox, Homebridge, Deluge, Portainer, and the standard arr services
VM1 - Win10 VM
Secondary box - ThinkCentre M720q
This box also runs ESXi. After seeing a few posts of people slapping an extra NIC in it, I've decided to build this as a router/firewall running pfSense. It also hosts a secondary domain controller (not really a point to this right now - I'm working to come up with other use cases for this box).
Third box - Fujitsu Esprimo
This box is running Proxmox at the moment, mostly because ESXi seems to have a hissy fit with the built-in NIC. The primary job for this box is to function as a Plex host for encoding.
WD Home box with 2TB of storage
This functions as a file share I've had for a few years; I'm keeping it in use whilst I wait to get my hands on a proper NAS.
I've still got a long way to go with setting up my environment, I'm planning to setup my own managed cloud storage service, password manager and knowledge portal, so if anyone has any pointers or suggestions on services to play around with I'm all ears!

submitted by BirdMan916 to homelab [link] [comments]


2024.03.24 12:31 svenvg93 Remote exporters scraping

Hi, I have a noob question about remote exporters with Prometheus. I'm working on a little project for work to set up testing probes which we can send to our customers when they complain about speed and latency problems, or which our business customers can keep permanently as an extra service.
The idea is that the probe will do the testing on an interval and the data will will end up a central database with Grafana to show it all.
Our preferred option would be to go with Prometheus instead of InfluxDB, as we can control the targets from a central point - no need to configure all the probes locally.
The only problem is that the probes will be behind NAT/firewall, so Prometheus can't reach the exporters to scrape. Setting up port forwarding is not an option.
So far I have found PushGateway, which can send the metrics, but it does not seem to fit our purpose. PushProx might be a good solution for this. The last option is Prometheus's own remote write, with a Prometheus instance at each location doing the scraping and sending to a central unit - but that loses the central target control we would like to have.
What would be a best way to accomplish this?
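From what I understand of PushProx, the setup would look something like this; the hostnames and ports below are made up, and I haven't tested any of it yet:

```yaml
# Central Prometheus scrapes probes through a PushProx proxy.
# pushprox.example.com and probe-cust42 are placeholder names.
scrape_configs:
  - job_name: customer-probes
    proxy_url: http://pushprox.example.com:8080   # the PushProx proxy
    static_configs:
      - targets: ['probe-cust42:9100']            # node_exporter behind NAT
```

Each probe would then run `pushprox-client --proxy-url=http://pushprox.example.com:8080` to register itself with the proxy, so the central server keeps control of the target list.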
https://preview.redd.it/868u4p4kp9qc1.jpg?width=990&format=pjpg&auto=webp&s=47c9ef417c223722f44bfb4d0d7ef7b4fc2333b7
submitted by svenvg93 to PrometheusMonitoring [link] [comments]


2024.03.20 05:51 tha_real_rocknrolla Help me optimize and finalize my homelab and NAS setup! Upgrading HDD's in TrueNAS and this post includes my current setup, components, and plans - I'm asking for some suggestions (size and style of PSU and Zimaboard case, etc)

So here's all of my current tech stuff. I built a homelab on a little generic mini PC I got for free (Ezoon) and then built a NAS with all second-hand parts except for the case. I'm upgrading to 4x 10TB drives in my NAS and am wondering what else I should try to acquire, or how I could use the parts I currently have to take full advantage of what I've got going here. I'd gladly post some pictures, and as you read the list you'll be able to tell I like the smaller stuff. As you can tell, I've cross-posted to minilab :)
So I'm thinking of using the Zimaboard with 4x Sata ports for the old 2TB drives (4x of them) with a drive cage (https://www.amazon.com/LVOERTUIG-Stainless-Cooling-Computer-Adaptedp/B0CGXK9XDD) and I'm obviously cheap and don't mind if it's not perfect. I kind of just want it to be on the smaller side.
Can I/should I use either the Zimaboard or the AceMagic mini PC as a cluster for Proxmox? Or should they be some sort of firewall/pfSense/OpenVPN boxes?
Then I've got these misc RAM and NVMe storage:
And the items I mentioned were "free" were obtained thru Amazon Vine - so I essentially pay 20% of the items value thru taxes.
So how can I make the most of what I've got? I'd appreciate any opinions. And what type of PSU/case could I use for the Zimaboard and the 4x 2TB drives? I'm willing to spend some money. But after $300 on drives I'm capping out at $100-$200 for anything else. Refurbs, pre-owned parts are fine with me (just NO Apevia power supplies!)
submitted by tha_real_rocknrolla to homelab [link] [comments]


2024.03.20 05:48 tha_real_rocknrolla Help me optimize and finalize my mini homelab! Includes my current setup, components, and plans - I'm asking for some suggestions (size and style of PSU and Zimaboard case, etc)

So here's all of my current tech stuff. I built a homelab on a little generic mini PC I got for free (Ezoon) and then built a NAS with all second-hand parts except for the case. I'm upgrading to 4x 10TB drives in my NAS and am wondering what else I should try to acquire, or how I could use the parts I currently have to take full advantage of what I've got going here. I'd gladly post some pictures, and as you read the list you'll be able to tell I like the smaller stuff, and this is a crosspost from homelab :)
So I'm thinking of using the Zimaboard with 4x Sata ports for the old 2TB drives (4x of them) with a drive cage (https://www.amazon.com/LVOERTUIG-Stainless-Cooling-Computer-Adaptedp/B0CGXK9XDD) and I'm obviously cheap and don't mind if it's not perfect. I kind of just want it to be on the smaller side.
Can I/should I use either the Zimaboard or the AceMagic mini PC as a cluster for Proxmox? Or should they be some sort of firewall/pfSense/OpenVPN boxes?
Then I've got these misc RAM and NVMe storage:
And the items I mentioned were "free" were obtained thru Amazon Vine - so I essentially pay 20% of the items value thru taxes.
So how can I make the most of what I've got? I'd appreciate any opinions. And what type of PSU/case could I use for the Zimaboard and the 4x 2TB drives? I'm willing to spend some money. But after $300 on drives I'm capping out at $100-$200 for anything else. Refurbs, pre-owned parts are fine with me (just NO Apevia power supplies!)
submitted by tha_real_rocknrolla to minilab [link] [comments]


2024.03.08 02:53 JimOfThePalouse Network Management by name?

Hi all:
I have a proxmox cluster, and I have a network infrastructure with about 10 vlans defined (I'm a small ISP, I do have different banks of public IPs as well as different internal/firewalled networks, etc). I've got 3 proxmox hosts/servers with a 10Gbps interface into my network, currently with all vlans tagged. However, I'm starting to upgrade some hardware, and in a couple cases due to switches at that location, etc, a vlan may be different than it is in other locations.
On other virtualization platforms I've used in the past (vmware, ovirt, etc), part of setting up a host is defining networks by name and mapping them to an interface/vlan on that host. Thus different hosts can have different methods of getting the vlans to them, but as long as the vlans created in the cluster exist on all hosts, they can be used for migration and HA guests, etc.
Is it possible to do that on Proxmox? I see in Proxmox 8.x that there is a "localnetwork" resource listed just above the disks, which gives me the idea that this is in the works, but I don't see any way to define multiple networks by name and assign guests to them explicitly.
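The closest thing I've found so far is the SDN feature: you define zones and named VNets at the cluster level, and the zone maps onto each node's local bridge. If I'm reading the docs right, the cluster-wide config would look roughly like this (syntax from memory, so treat it as a sketch and verify against the SDN documentation):

```
# /etc/pve/sdn/zones.cfg — a VLAN zone backed by a local bridge on each node
vlan: lanzone
        bridge vmbr0

# /etc/pve/sdn/vnets.cfg — a named network guests attach to; tag is the VLAN id
vnet: public1
        zone lanzone
        tag 169
```

What I haven't confirmed is whether the bridge/VLAN mapping can differ per node, which is the part I actually need.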
Thanks!
submitted by JimOfThePalouse to Proxmox [link] [comments]


2024.03.02 00:35 CrimsonTide5 Would connecting two routers to one switch make sense to separate home lab with VLAN?

Would connecting two routers to one switch make sense to separate home lab with VLAN?
I'm setting up a home network to learn more about networking and study cybersecurity to eventually change careers and pivot into security.
Based on the diagram:
  1. Would the 2nd router (test network) even be necessary if I'm creating VLANs?
  2. With VLANs, would I be able to connect the main PC to any of the devices on the test network?
Does this make any sense at all?
submitted by CrimsonTide5 to homelab [link] [comments]


2024.02.26 13:41 Stunning-Bowler-2698 For Hire: Former Team Lead/Network Engineer/System Engineer with 15 years leadership experience running all technical aspects of an MSP. Also can do Project management and sales and customer account support. Remote or Local to Tampa Bay Area

I have 25 years experience at MSPs. I have built two practices from the ground up and specialize in making sure that the solutions I design are scalable and fit to purpose. This includes designing and developing our offerings and training staff in following best practices. I have multiple references from clients I have had for as long as 15 years.
The thing I am most proud of is that I am a teacher and mentor, capable of finding and developing junior techs by advancing their skillset, and careers.
I have specific expertise in Mikrotik firewalls as well as Sonicwall, Juniper and Fortigate. I have extensive cloud experience in AWS, Google and Azure. In addition, I have developed my own private label cloud hosting practice.
For on prem servers I have extensive experience with all versions of Windows, as well as VMware and ProxMox.
I have a strong background in Windows and Mac OS, as well as linux.
DM me for a full resume.
submitted by Stunning-Bowler-2698 to mspjobs [link] [comments]


2024.02.25 03:52 BrooklynYupster Sharing a network folder from an unprivileged LXC for paperless-ngx consumption

There is plenty of online discourse about mounting and accessing a network folder from an unprivileged ProxMox LXC. This is not one of them.
I am attempting to make a specific folder on the unprivileged LXC write-accessible to my Windows 10 daily driver via a network share. In particular, the folder is /opt/paperless/consume on the unprivileged LXC.
For those unaware, there is a feature of paperless-ngx where you can specify a folder that the service monitors (the default is above), and when a file is present, it indexes it, OCRs it, copies it to its permanent home in the paperless-ngx data folder, and then deletes the original file. I'd like to be able to drop select files into a Windows folder throughout my day, trusting that the above actions happen.
My only real constraint is that I want to avoid making any changes to the PVE host because I'm running a hyper-converged HA cluster across "vanilla" hosts that may grow and I don't want to start syncing host configuration.
This debian paperless-ngx LXC container was generated from one of u/tteckster's amazing helper scripts (many kudos and much karma to you!) in case that's relevant.
Here's what I've tried (Thank God for Quick and Easy Snapshots and rollbacks in ProxMox):
  1. Looked into syncthing, duplicati, rsync. Decided to attempt install syncthing on the container, but even though the service ran and I could curl the GUI from within the container, I could not access it in my browser on my Windows machine. Troubleshot for a bit (firewall, etc.). Abandoned.
  2. Began looking into FOSS for sftp clients and services that could run on windows and integrate a remote sFtp into explorer (WinFSP). Paused on this.
  3. Installed Samba using apt and configured following this guide. Came really close. Could access the folder. Could see and open files already created on the LXC host. Could not create, copy, or save changes from Windows. Windows threw a permissions error. Troubleshot for a while (mostly focused on sambauser permissions on the LXC) Abandoned.
  4. Attempted to apt install nfs-kernel-server. It errored out, and I troubleshot, finally arriving at "of course a container can't run a kernel module like nfs-kernel-server; maybe try a user-space NFS server like Ganesha-NFS?" (I had no idea; the concepts of kernel space and user space had not existed in my mind prior). Abandoned.
Paths forward I am mulling over:
  1. Diving deeper into the Windows + sFTP option
  2. Trying Samba again with fresh eyes and troubleshooting the write permission issue.
  3. Starting fresh and using docker to compose a paperless+syncthing container.
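For path 2, the share section I'd retry with looks roughly like this; the paperless user and the masks are my guesses at what should make writes work, assuming a 'paperless' user exists inside the LXC and owns /opt/paperless/consume:

```ini
; /etc/samba/smb.conf — hypothetical [consume] share
[consume]
   path = /opt/paperless/consume
   read only = no
   browseable = yes
   force user = paperless      ; map all writes to the folder's owner
   create mask = 0664
   directory mask = 0775
```

plus `smbpasswd -a paperless` so Windows has credentials to authenticate with; my suspicion is that last time the share was writing as a user that didn't own the folder.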
Before I research and pursue other paths, wanted to come to this glorious hivemind...

TL;DR What is the simplest and quickest way (without changes to the PVE host) to make a folder on an unprivileged LXC write-accessible in Windows Explorer?
submitted by BrooklynYupster to Proxmox [link] [comments]


2024.02.22 15:30 JesusXP Sharing existing disks between multiple containers and VM's - is this possible?

Hi all, I am struggling with something that should be pretty easy to do, but I have not been successful in my attempts.

I have a setup where I am running Proxmox and VMs on my /dev/nvme1n1 drive, but I have several already-existing disks that I had been using, mounted to my Linux distro back when I was running Ubuntu natively on this server. Now that I am running Proxmox, it recognizes that I still have these disks, but it has isolated them.
https://preview.redd.it/l9y0fwswb5kc1.png?width=1877&format=png&auto=webp&s=028a8937ad9bc75ede861ae1b9aacbda8b82b58e
When it comes to adding an existing disk to a VM, I found a great article from Proxmox directly on how to pass through disks, and I was able to pass through my two 20TB drives, which host my TV shows and movies, to my Ubuntu VM that runs my Plex Media Server (I installed it on the VM which has the passed-through GPU).

https://preview.redd.it/m5chdzngd5kc1.png?width=1874&format=png&auto=webp&s=36852284f5f611e3ccfe79de555f188c9b10e864
Since I found the tteck GitHub, I installed the rest of my services as containers, when previously I was running them all natively on the original Ubuntu-only server. Now I am looking to run each as an independent container, but I am not sure if I can share the same disk between multiple containers and a VM without an issue. I am also not sure how to share it to the CT, because this is what I tried to use: # pct set 101 -mp0 volume=/dev/sdd,mp=/mnt/media
I had shelled into CT 101 and made a directory at /mnt/media, then I tried to set the mount point to the disk directly, but this command failed.

Is what I'm looking to do possible at all? The idea would be that the containers could all share access to my local disks on this Proxmox server. Is there any definitive write-up on how to achieve this? Everything I'm seeing looks super complicated and does it by creating new disks; I have existing disks which I just want to mount.
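One thing I've since read (untested, so take it as a sketch) is that pct bind mounts want a host directory rather than a block device, which would explain why my command failed. The pattern would apparently be to mount the disk on the host once and bind the same path into each CT (paths below are examples):

```
# On the PVE host:
mkdir -p /mnt/media
mount /dev/sdd1 /mnt/media                # or add it to /etc/fstab

# Bind the same host directory into several containers:
pct set 101 -mp0 /mnt/media,mp=/mnt/media
pct set 102 -mp0 /mnt/media,mp=/mnt/media
```

For unprivileged CTs the uid/gid mapping on the files would still need sorting out, if I understand correctly.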
submitted by JesusXP to Proxmox [link] [comments]


2024.02.16 15:28 CAMOBAP795 Is it possible to route Proxmox traffic through a VM?

Currently my Proxmox is connected directly to the ISP router:
ISPRouter <- ProxMox
And I would like to improve on it by adding my own pfSense VM as a firewall and an OpenWRT VM as a WiFi router, so ideally the network would look like this:
ISPRouter <- pfSense VM <- OpenWRT VM <- ProxMox
I have a couple of questions about this design: 1. Is it possible to implement such a topology? 2. Any downsides to this approach?
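My current thinking for wiring it up (an untested sketch; addresses are made up) is two Linux bridges on the Proxmox host: vmbr0 holds the physical NIC and only the pfSense WAN vNIC, while vmbr1 has no physical port and carries the LAN side plus the host's management IP:

```
auto vmbr0
iface vmbr0 inet manual       # WAN bridge: physical NIC + pfSense WAN vNIC only
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

auto vmbr1
iface vmbr1 inet static       # LAN bridge: no physical port
        address 192.168.10.2/24
        gateway 192.168.10.1  # pfSense LAN IP
        bridge-ports none
        bridge-stp off
        bridge-fd 0
```

One downside I can already see: if the pfSense VM is down, the host itself loses its route out.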
submitted by CAMOBAP795 to Proxmox [link] [comments]


2024.01.31 01:38 buenology Proxmox VLAN cannot Ping IP from same VLAN.

Hey everyone, so I just realized that within Proxmox, a machine on my VLAN-169 cannot ping another machine on VLAN-169.
Example
IP 10.10.169.5 cannot ping IP 10.10.169.6, and vice versa.

Here's my config (pretty simple). The hash tags are just comments; they're not in the original config.

auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
bridge-ports eno1
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 1-4094
##10.10.2.2 connects to ProxMox Manager
auto vmbr0.2
iface vmbr0.2 inet static
address 10.10.2.2/24
gateway 10.10.2.254
##This is my testing environment, which has been great, but it can't ping other machines on the same VLAN
auto vmbr0.169
iface vmbr0.169 inet static
address 10.10.169.2/24

source /etc/network/interfaces.d/*
Update: After reading posts and looking for an answer for days, I came across a small post that had been somewhat ignored in the Proxmox forum.
Disable the Windows Firewall on the VM within Proxmox!! That's what it was: I had created Windows-based VMs in Proxmox and the PCs couldn't ping each other, which was creating issues, and boom, disable the firewall and it's all pinging and working.
I will re-enable the Windows Firewall on all of the Windows VMs and figure out which setting is blocking the ping; I'm sure it's ICMP being blocked, but it's late now. I'll test tomorrow, but there you go!!
Maybe it's the same with Linux? Haven't tested yet.
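Rather than leaving the firewall off, my plan is to re-enable it and just allow inbound echo requests. As far as I know, the standard netsh rule for that is the following (run from an elevated prompt on each Windows VM; verify on your build):

```
netsh advfirewall firewall add rule name="Allow ICMPv4 Echo" protocol=icmpv4:8,any dir=in action=allow
```

Will confirm once I've actually tested it on the VMs.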
submitted by buenology to Proxmox [link] [comments]


2024.01.04 04:23 Windex4Floors Why can I access my apps outside of my network?

Hello!
I've been testing my security on my home network since I use a reverse proxy to access Home Assistant and Plex externally. To set this up, I run NGINX in docker on my NAS and port forward port 443 and port 80.
I had also initially set up my NAS with QuickConnect and had the relay enabled. I did this so that I could show my wife how a NAS was an investment to get away from the cloud, and have her phone/laptop back up automatically through DS Photos and DS Drive. This has been working fine, but I recently disabled the Synology relay, expecting all of the DS apps to stop working externally; to my surprise, everything still works! What is going on here? I don't have UPnP enabled on my router, and the DS apps on my external devices are signed in using QuickConnect... Should I be concerned?
Edit: not sure what other info to provide but apparently I didn't put enough details so I will try to add more.
submitted by Windex4Floors to synology [link] [comments]

