

2024.05.17 01:00 Outrageous-Machine-5 Using a forward proxy server as a Sonatype Nexus repo

We have a customer request to expose some RHEL packages in CI, and our solution was to set up a proxy repo to pull from a mirror, which should be a standard use case.
The issue is Sonatype docs for creating a yum proxy will not work for our use case:
  1. Because our RHEL instances are licensed through AWS, they are not registered and do not have a subscription attached. They don't need one, because they have SSL certs to authenticate to the various RHUI repos configured in `yum.repos.d`. But because there is no subscription attached, there is no entitlement from which to create the `keystore.p12` file used to authenticate the request in Nexus.
  2. Even if the request could authenticate, a Nexus proxy repo only supports one remote URL, while `yum.repos.d` has 4 enabled repos to query.
  3. RHEL also makes use of a client config server repo to keep the instance and RHEL packages up-to-date. It feels wrong to take the proxy request for repos and separate it from the process Red Hat uses to keep the integrity of their package management.
My idea to resolve this is to set up a RHEL instance that acts as a forward proxy server in our cluster. The idea is this:
When a user invokes a yum install, Nexus forwards the request to the proxy, the proxy forwards the request to RHUI, and the package is pulled from RHUI and sent back to the client.
This should make managing subscriptions moot: AWS handles the connectivity and authentication to RHUI, the `yum.repos.d` structure stays intact and referenced with only one yum proxy repo needed in Nexus, and the package integrity provided by the RHEL client config server repo is still maintained.
So my questions are these: am I on the right track with this approach? Am I correct that Nexus can't handle multiple enabled yum repos without making a one-to-one Nexus repo for each yum repo, or how would you handle mapping one Nexus yum repo to many yum repos? And, since I'm still really fresh to DevOps and AWS/Kubernetes: how do you point Nexus to this proxy server? We can assume they will be in the same network/cloud/cluster, but I don't know if there will be extra authentication or a TLS handshake needed to authenticate the request to the forward proxy. I wonder if it's a problem of how pods communicate with one another, but so far I've only ever used a public/private key pair to authenticate to my EC2 instances.
I'm also wondering if I should still use a proxy repo in Nexus, or a hosted repo since we own the RHEL instance? Basically, whatever enables us to get these packages.
submitted by Outrageous-Machine-5 to devops


2024.05.16 23:49 HarryPudding careldindiabloleague

Cisco Router Security
What are the two access privilege modes of the Cisco router?
User EXEC Mode: This is the initial access mode for a router. In this mode, the user can access only a limited set of basic monitoring commands.
Privileged EXEC Mode: This mode provides access to all router commands, such as debugging and configuration commands. It requires a password for access to ensure security.
What is the approach for password for the privileged mode of the router?
enable secret [password]
uses a hashing algorithm so that the password is stored as a hash rather than in plain text
How to ensure that all passwords in the router are stored in the encrypted form?
service password-encryption
What is the difference between the Cisco router’s startup and running configurations?
How to save the running configuration into start up configuration?
Startup Configuration: Stored in the NVRAM, this configuration is used to boot the router. It remains unchanged until an administrator explicitly saves the running configuration to it.
Running Configuration: Held in the router’s RAM, this configuration is active on the router. Changes to the router’s configuration are made here and are effective immediately.
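The question above asks how to save the running configuration, but the notes never give the command; on Cisco IOS it is (the older `write memory` form used later in these notes does the same thing):

```
copy running-config startup-config
```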
Know and be able to configure all aspects of the Cisco router covered in class. For example,
configuring the router interfaces, setting the router OSPF ID, etc.
enable
configure terminal
hostname MyRouter
interface GigabitEthernet0/0
ip address 192.168.1.1 255.255.255.0
no shutdown
exit
interface Serial0/0/0
ip address 10.0.0.1 255.255.255.252
clock rate 64000
no shutdown
exit
router ospf 1
router-id 1.1.1.1
network 192.168.1.0 0.0.0.255 area 0
exit
enable secret mysecretpassword
line console 0
password myconsolepassword
login
exit
line vty 0 4
password myvtypassword
login
exit
crypto key generate rsa
ip ssh version 2
ip ssh time-out 60
ip ssh authentication-retries 2
ip route 0.0.0.0 0.0.0.0 192.168.1.254
access-list 10 permit 192.168.1.0 0.0.0.255
access-list 10 deny any
Practical Routing, OSPF, and Security
What is the difference between static and dynamic routing?
Static Routing: Involves manually setting up routes in the router's routing table through configuration commands. These routes do not change unless manually updated or removed. Static routing is simple, secure, and uses less bandwidth but lacks scalability and flexibility.
Dynamic Routing: Automatically adjusts routes in the routing table based on current network conditions using routing protocols. This approach allows for more flexibility, scalability, and fault tolerance, but consumes more resources and can be complex to configure.
What is the difference between link state and distance vector routing?
Distance Vector Routing: Routers using distance vector protocols calculate the best path to a destination based on the distance and direction (vector) to nodes. Updates are shared with neighboring routers at regular intervals or when changes occur. This approach can lead to slower convergence and issues like routing loops.
Link State Routing: Each router learns the entire network topology by exchanging link-state information. Routers then independently calculate the shortest path to every node using algorithms like Dijkstra’s. This results in quicker convergence and fewer routing loops.
Distance Vector Routing: Each router computes distance from itself to its next immediate neighbors. (RIP, EIGRP, & BGP)
-Does not build a full map of the network
-Focuses more on the next hop towards the destination
Link State Routing: Each router shares knowledge of its neighbors with every other router in the network. (OSPF and IS-IS)
-Builds a full map of the network
-Each router shares information
-Maintains a database of the entire network.
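As a sketch of what "each router independently calculates the shortest path" means in link-state routing, here is a minimal Dijkstra computation over a hypothetical link-state database (the routers and link costs are made up for illustration):

```python
import heapq

def dijkstra(lsdb, source):
    """Compute lowest total path cost from `source` to every router.

    lsdb: dict mapping router -> list of (neighbor, cost) links,
    i.e. the full network map every OSPF router ends up holding.
    """
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, node = heapq.heappop(pq)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry, a shorter path was already found
        for neighbor, cost in lsdb.get(node, []):
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(pq, (nd, neighbor))
    return dist

# Hypothetical LSDB: every router holds this same full map of the network.
lsdb = {
    "R1": [("R2", 1), ("R3", 5)],
    "R2": [("R1", 1), ("R3", 2)],
    "R3": [("R1", 5), ("R2", 2)],
}
print(dijkstra(lsdb, "R1"))  # R1 reaches R3 via R2 at cost 3, not 5
```

Because every router runs this same computation over the same LSDB, all routers converge on consistent forwarding decisions.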
Give an example of the distance vector and link state algorithms.
Distance vector = RIP; Link state = OSPF
What type of protocol is Routing Information Protocol (RIP)? Be able to understand
examples and solve problems.
Example of a distance vector protocol
dynamic protocol
-shares routing info with neighboring routers
-an interior gateway protocol that operates within autonomous system
-oldest of all dynamic protocol; RIPv1
-widely used open standard developed by IETF
-a distance vector routing protocol
-limited to maximum 15 hops;
How RIP works:
-RIP sends regular update messages (advertisements) to neighboring routers
-every 30 seconds, with the timer resetting after each successful ack
-a route becomes invalid if no update has been received for it in 180 seconds
-RIPv1 (obsolete) uses broadcast, while RIPv2 uses a multicast address
-update messages only travel a single hop
Downsides: each router can only have one entry per destination in its routing table, so it has to wait for an advertisement to learn an alternative path; it cannot reach destinations more than 15 hops away; little to no security.
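The 15-hop limit can be seen in a toy Bellman-Ford-style table merge, where RIP treats a metric of 16 as "infinity" (unreachable); the prefixes and metrics below are hypothetical:

```python
RIP_INFINITY = 16  # a metric of 16 means unreachable in RIP

def rip_update(table, neighbor_table, neighbor_cost=1):
    """Merge a neighbor's advertised routes into our routing table.

    table / neighbor_table: dict mapping destination prefix -> hop-count
    metric. A route advertised at 15 hops would cost us 16, i.e. infinity,
    so it is never installed.
    """
    updated = dict(table)
    for dest, metric in neighbor_table.items():
        new_metric = min(metric + neighbor_cost, RIP_INFINITY)
        if new_metric < updated.get(dest, RIP_INFINITY):
            updated[dest] = new_metric
    return updated

table = {"10.0.0.0/24": 1}
advert = {"10.0.1.0/24": 3, "10.0.2.0/24": 15}  # neighbor's advertised metrics
table = rip_update(table, advert)
print(table)  # the 15-hop route would cost 16 here, so it is never installed
```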
What type of protocol is the Open Shortest Path First (OSPF) protocol? Be able to understand examples and solve problems.
-a link state routing protocol
-an interior gateway protocol used for intra-AS routing, like RIP
What is the Link State Advertisement (LSA) in OSPF? What is the Link State Database
(LSDB)?
-LSA contains data about a router, its subnets, and some other network information.
-OSPF puts all the LSAs from different routers into a Link-State Database (LSDB)
The goal of OSPF is to be able to determine a complete map of the interior routing path to be able to create the best route possible.
The way this is done is that OSPF finds all the routers and subnets that can be reached within the entire network. The result is that each router will have the same information about the network by sending out LSA.
How does each router in OSPF create a map of the entire network?
Step 1 : Acquire neighbor relationship to exchange network information.
Step 2: Exchange database information, neighboring routers swap LSDB information with each other
Step 3: Choosing the best routes, each router chooses the best routes to add to its routing table based on the learned LSDB information.
What is the process for two OSPF routers to become neighbors?
A. A neighbor sends out a Hello packet, including its router ID and the subnets it routes, to the given multicast address for a given OSPF area ID.
This is also a way for routers to tell neighbors that they are still up and good to go.
B. Once other routers receive this packet, they run some checks. The neighboring routers must match the following requirements:
-area id needs to be the same (also used when scaling up OSPF)
-the shared or connecting link should be on the same subnet.
-The Hello and dead timer must be the same.
-the dead timer gives enough time before the sending router assumes that the neighbor is down.
-the hello timer is typically 10 secs for point-to-point and broadcast networks, with the dead timer four times that (40 secs).
C. If all is fine, the receiving router (R2) goes into the Init stage and sends a Hello message of its own. This Hello packet lists its own network info along with R1 as a known neighbor. This puts R1 into a 2-way communication status.
D. R1 sends another Hello message to R2 with R2 listed as a known neighbor. This gives R2 a 2-way communication status as well.
E. We now have 2-way neighboring routers.
What is the difference between point-to-point and multi-access networks? How does OSPF
handle each case?
Point-to-Point: A network setup where each connection is between two specific nodes or devices. OSPF treats these links with straightforward neighbor relationships since there are only two routers on each segment. 
Multi-Access Networks: Networks where multiple routers can connect on the same segment, such as Ethernet. OSPF uses a Designated Router (DR) and a Backup Designated Router (BDR) on these types of networks to reduce the amount of OSPF traffic and the size of the topological database.
The DR is selected by the highest OSPF priority, with the highest router ID as the tie-breaker.
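The DR/BDR election can be sketched as: highest priority wins, highest router ID breaks ties, and priority 0 makes a router ineligible. (Real OSPF elections are also non-preemptive, i.e. an existing DR is not replaced by a newly arriving router, which this sketch ignores; the routers below are hypothetical.)

```python
def elect_dr(routers):
    """routers: list of (router_id, priority) tuples.
    Returns (DR, BDR) router IDs; priority-0 routers never become DR."""
    eligible = [r for r in routers if r[1] > 0]
    # Rank by priority first, then router ID compared octet by octet.
    key = lambda r: (r[1], tuple(int(o) for o in r[0].split(".")))
    ranked = sorted(eligible, key=key, reverse=True)
    dr = ranked[0][0] if ranked else None
    bdr = ranked[1][0] if len(ranked) > 1 else None
    return dr, bdr

routers = [("1.1.1.1", 1), ("2.2.2.2", 10), ("3.3.3.3", 10), ("4.4.4.4", 0)]
print(elect_dr(routers))  # 3.3.3.3 wins the priority tie on router ID
```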
Be able to configure OSPF routing given a topology.

Example:
Consider a topology with three routers R1, R2, and R3. The routers are connected R1 ⇒ R2 ⇒ R3 ⇒ R1. R1 has interface f0/0 connected to the interface f0/0 of R2. R2 has interface f0/1 connecting to the interface f0/0 of R3. Finally, R3 has interface f1/0 connecting to the interface f1/0 of R1. Assuming all routers are Cisco 7200 routers, configure them to use OSPF to dynamically route in this topology (you will be given the Cisco router manual for such questions).

R1:
enable
configure terminal
hostname R1
interface FastEthernet0/0
ip address 192.168.12.1 255.255.255.0
no shutdown
exit
interface FastEthernet1/0
ip address 192.168.31.1 255.255.255.0
no shutdown
exit
router ospf 1
router-id 1.1.1.1
network 192.168.12.0 0.0.0.255 area 0
network 192.168.31.0 0.0.0.255 area 0
exit
end
write memory
R2:
enable
configure terminal
hostname R2
interface FastEthernet0/0
ip address 192.168.12.2 255.255.255.0
no shutdown
exit
interface FastEthernet0/1
ip address 192.168.23.1 255.255.255.0
no shutdown
exit
router ospf 1
router-id 2.2.2.2
network 192.168.12.0 0.0.0.255 area 0
network 192.168.23.0 0.0.0.255 area 0
exit
end
write memory
R3:
enable
configure terminal
hostname R3
interface FastEthernet0/0
ip address 192.168.23.2 255.255.255.0
no shutdown
exit
interface FastEthernet1/0
ip address 192.168.31.2 255.255.255.0
no shutdown
exit
router ospf 1
router-id 3.3.3.3
network 192.168.23.0 0.0.0.255 area 0
network 192.168.31.0 0.0.0.255 area 0
exit
end
write memory
How does OSPF authenticate packets to protect against packet spoofing and tampering? Be able to enable it on a Cisco router.
OSPF (Open Shortest Path First) can authenticate packets to protect against packet spoofing and tampering using several methods. The two main types of authentication are:
Plain Text Authentication: This is simple and provides minimal security. It sends the password in clear text.
Message Digest 5 (MD5) Authentication: This provides stronger security by using cryptographic hash functions to authenticate OSPF packets.
Plain text:
enable
configure terminal
interface FastEthernet0/0
ip address 192.168.12.1 255.255.255.0
ip ospf authentication
ip ospf authentication-key cisco123
no shutdown
exit
router ospf 1
router-id 1.1.1.1
network 192.168.12.0 0.0.0.255 area 0
area 0 authentication
exit
write memory
MD5:
enable
configure terminal
interface FastEthernet0/0
ip address 192.168.12.1 255.255.255.0
ip ospf authentication message-digest
ip ospf message-digest-key 1 md5 securepassword
no shutdown
exit
router ospf 1
router-id 1.1.1.1
network 192.168.12.0 0.0.0.255 area 0
area 0 authentication message-digest
exit
write memory
Network Defense Fundamentals

What is IP spoofing? Explain.
-The IP packet contains the source and destination IP addresses.
-It is straightforward to modify the source IP address of a packet.
-IP spoofing: the sender changing his source address to something other than his real address.
How can IP spoofing be used in security attacks?
-If the attacker sends an IP packet with a spoofed IP, they will not receive a response from the destination: the machine with the IP matching the spoofed IP will receive the response.
-IP spoofing operation: the sender spoofs the source IP address to point to another target. The receiving system replies to the spoofed IP.

What are the countermeasures to IP spoofing?
Ingress and Egress Filtering: Network operators should implement filtering rules on routers and firewalls to block packets with source IP addresses that should not originate from those networks. Ingress filtering blocks incoming packets with a source IP address that is not valid for the network, while egress filtering blocks outgoing packets with an invalid source IP address.
Reverse Path Forwarding (RPF): This technique ensures that the incoming packets are received on the same interface that the router would use to send traffic back to the source. If the path does not match, the packet is discarded, preventing spoofed packets from passing through.
IPsec (Internet Protocol Security): IPsec can be used to authenticate and encrypt IP packets, ensuring that they come from legitimate sources and have not been tampered with. This makes spoofing attacks significantly more difficult.
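The ingress-filtering check can be sketched with Python's `ipaddress` module: a border router knows which prefixes legitimately live behind a given interface and drops packets whose source address does not match (the prefixes here are made up for illustration):

```python
import ipaddress

# Hypothetical: prefixes legitimately assigned behind the "inside" interface.
INSIDE_PREFIXES = [ipaddress.ip_network("192.168.1.0/24"),
                   ipaddress.ip_network("10.10.0.0/16")]

def ingress_ok(source_ip):
    """Accept a packet arriving on the inside interface only if its
    source address belongs to a prefix assigned to that interface;
    anything else is a spoofed or misrouted source and is dropped."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in INSIDE_PREFIXES)

print(ingress_ok("192.168.1.42"))  # True: legitimate inside source
print(ingress_ok("8.8.8.8"))       # False: spoofed/foreign source, drop it
```

Egress filtering is the same test applied in the outbound direction, so a site cannot emit packets claiming someone else's address space.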
How can IP spoofing be used to perform DoS attacks?
IP spoofing is often used in Denial of Service (DoS) attacks to obscure the attacker's identity and to overwhelm the target with traffic from what appears to be multiple sources. One common type of DoS attack that utilizes IP spoofing is a Smurf Attack. In a Smurf Attack, the attacker sends ICMP (Internet Control Message Protocol) echo requests to broadcast addresses of networks, with the source IP address spoofed to that of the victim. The devices on the network respond to the echo requests, sending replies back to the victim's IP address. This amplifies the traffic directed at the victim, potentially overwhelming their network and causing a DoS condition.
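The amplification in a Smurf Attack is simple arithmetic: one spoofed echo request to a broadcast address triggers one reply per responding host, multiplying the attacker's bandwidth by the number of responders (the numbers below are purely illustrative):

```python
def smurf_amplification(request_bps, responding_hosts):
    """Traffic hitting the victim when every host on the broadcast
    segment replies to each spoofed ICMP echo request."""
    return request_bps * responding_hosts

# Illustrative: a 1 Mbit/s spoofed request stream, 250 hosts replying.
victim_bps = smurf_amplification(1_000_000, 250)
print(victim_bps)  # 250 Mbit/s aimed at the victim
```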

Know how to use
hping3
for performing ping floods.
Using hping3 to perform ping floods involves sending a high volume of ICMP Echo Request packets to a target to overwhelm it.
Basic ping flood:
sudo hping3 -1 --flood [target_IP]
Using a spoofed source IP:
sudo hping3 -1 --flood -a [spoofed_IP] [target_IP]
Controlling the packet sending rate:
sudo hping3 -1 --flood -i u1000 [target_IP]
Combining:
sudo hping3 -1 --flood -a 10.0.0.1 -i u1000 192.168.1.1
Firewalling
What is a firewall?
a filtering device on a network that enforces network security policy and protects the network against external attacks.
According to NIST SP 800-41, what are the characteristics of a firewall?
NIST standard defines the possible characteristics that a firewall can use to filter traffic.
-(IP Address and Protocol type) filtering based on source/destination IP address/ports, traffic direction and other transport layer characteristics.
-(Application Protocols)controls access based on application protocol data
-(User identity) controls access based on user identity
-(Network activity)
What are the limitations of the firewall?
Firewall capabilities:
-Define a traffic chokepoint in the network and protect against IP spoofing and routing attacks
-Provide a location for monitoring security events
-Provide non-security functions: logging internet usage, network address translation
-Serve as a platform for VPN/IPsec
Firewall limitations:
-Cannot protect against attacks bypassing the firewall, such as connections from inside the organization to the outside that do not go through the firewall
-Cannot protect against internal threats such as disgruntled employees
What is a packet filter firewall? Be able to write and interpret rules and to spot configuration flaws.
Packet filtering firewall: applies a set of rules to each packet based on the packet headers. Filters based on: source/destination IP address, source/destination port numbers, the IP protocol field (defines the transport protocol), and the interface (for firewalls with 3+ network interfaces, the interface the packet came from or is going to).

What is the difference between the default allow and default deny policies? Which one is the more secure one?
-When no rules apply to a packet, a default rule is applied:
default deny: what is not explicitly permitted is denied
default allow (forward): what is not explicitly denied is allowed
Default deny is more secure: you don't have to identify all of the cases that need to be blocked, and if one is missed, default deny will still deny it.
Ports 0-1023: reserved (well-known)
Ports 1024-65535: ephemeral
The source port used by the system initiating a connection is always chosen from the ephemeral ports.
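A packet filter with a default-deny policy can be sketched as a first-match rule scan that falls through to DROP when nothing matches (the rules and packet fields below are hypothetical):

```python
# Each rule: (source prefix, destination port, action); "*" matches any port.
RULES = [
    ("192.168.1.", 80, "ACCEPT"),    # inside hosts may reach web servers
    ("192.168.1.", 443, "ACCEPT"),
]

def filter_packet(src_ip, dst_port):
    """First matching rule wins; the default policy is deny."""
    for src_prefix, port, action in RULES:
        if src_ip.startswith(src_prefix) and port in ("*", dst_port):
            return action
    return "DROP"  # default deny: anything not explicitly permitted is denied

print(filter_packet("192.168.1.5", 443))  # ACCEPT: explicit rule
print(filter_packet("192.168.1.5", 23))   # DROP: telnet was never permitted
print(filter_packet("10.0.0.9", 80))      # DROP: unknown source
```

Note how the telnet and unknown-source packets are denied without anyone having written a rule for them: that is the safety margin default deny provides.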
Be able to configure the packet filtering functions of iptables.

Example:
Write iptables rules to block all ICMP traffic to and from the system.
iptables -A INPUT -p icmp -j DROP
iptables -A OUTPUT -p icmp -j DROP
Example:
Write iptables rules to block all traffic on port 22
iptables -A INPUT -p tcp --dport 22 -j DROP
iptables -A OUTPUT -p tcp --dport 22 -j DROP

Example:
Write iptables rules to block traffic to host 192.168.2.2
iptables -A OUTPUT -d 192.168.2.2 -j DROP
iptables -A INPUT -s 192.168.2.2 -j DROP
What are the limitations of the packet filter firewall?
-Does not examine upper-layer data: cannot prevent attacks that employ application-specific vulnerabilities or functions, and cannot block application-specific commands.

What is the stateful firewall and how does it compare to a packet filter?
A stateful firewall is a network security device that monitors and tracks the state of active connections, making decisions based on the context of the traffic. Unlike a simple packet filter, which examines individual packets in isolation based on predetermined rules, a stateful firewall keeps track of connections over time, distinguishing between legitimate packets that are part of an established session and potentially malicious ones. This contextual awareness allows it to block unauthorized connection attempts and prevent attacks such as spoofing and session hijacking. While packet filters, or stateless firewalls, operate faster and consume fewer resources by applying static rules to each packet independently, they lack the sophisticated traffic pattern handling and enhanced security provided by stateful firewalls.
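The difference can be sketched with a tiny connection table: a stateless filter judges each packet alone, while a stateful one remembers which flows were legitimately opened from the inside and only admits replies to those (addresses and ports below are made up):

```python
class StatefulFirewall:
    """Minimal connection-tracking sketch: outbound packets create state,
    inbound packets are accepted only if they match an existing flow."""

    def __init__(self):
        # Tracked flows: (inside_addr, inside_port, remote_addr, remote_port)
        self.connections = set()

    def outbound(self, src, sport, dst, dport):
        self.connections.add((src, sport, dst, dport))
        return "ACCEPT"

    def inbound(self, src, sport, dst, dport):
        # A legitimate reply reverses the tuple recorded on the way out.
        if (dst, dport, src, sport) in self.connections:
            return "ACCEPT"  # part of an established connection
        return "DROP"        # unsolicited packet: no matching state

fw = StatefulFirewall()
fw.outbound("192.168.1.5", 50000, "203.0.113.7", 443)
print(fw.inbound("203.0.113.7", 443, "192.168.1.5", 50000))  # ACCEPT (reply)
print(fw.inbound("203.0.113.7", 443, "192.168.1.5", 50001))  # DROP (no state)
```

A stateless filter would need a standing rule admitting all inbound port-443 traffic to allow that reply, which is exactly the looser posture the stateful approach avoids.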

What is the application-level firewall? What are its advantages and limitations?
An application-level firewall, also known as an application firewall or proxy firewall, operates at the application layer of the OSI model. It inspects and filters traffic based on the specific application protocols (e.g., HTTP, FTP, DNS) rather than just IP addresses and port numbers.
Limitations: increased communications overhead due to two separate TCP connections, and not transparent to the client.
Application-level gateways are also known as application-level proxies.
-Act as a relay for the application-level traffic.
-Run at the application layer and examine application-layer data.
Supported protocols: FTP, SMTP, HTTP
What is a circuit-level firewall? What are its advantages and limitations?
-Similar to the application-level gateway, but only tracks the state of the TCP/UDP sessions.
-Does not examine application data , simply relays TCP segments
-Allow/deny decisions based on whether a packet belongs to an established and trusted connection
Advantages of circuit-level firewalls:
-Do not filter individual packets (simplifies rules)
-Fast and efficient
Disadvantages:
-Do not filter individual packets
-Require frequent updates: traffic is filtered with rules and policies that need regular updates for new threats and risks
-The vendor needs to modify the TCP/IP implementation for their applications to use the circuit-level proxy.
What are the different approaches to basing the firewall?
-Stand-alone machines
-Software modules in routers, switches, or servers
-Pre-configured security appliances
What are the host-based firewalls?
Host-based firewalls: a firewall software module used to secure a single host.
What are the network device firewalls?
Network device firewall = routers and switches often have firewall functions, like packet filtering and stateful inspection, to check and filter packets
What are the virtual firewalls?
-in a virtualized environment, servers, switches, and routers can be virtualized and share physical hardware. The hypervisor that manages the virtual machines can also have firewall capabilities.
What is the DMZ? How is it used for securing networks?
A Demilitarized Zone (DMZ) in network security is a physical or logical subnetwork that contains and exposes an organization's external-facing services to an untrusted network, typically the internet. The primary purpose of a DMZ is to add an additional layer of security to an organization's local area network (LAN). By isolating these externally accessible services, the DMZ ensures that if an attacker gains access to the public-facing systems, they do not have direct access to the rest of the network.
How the DMZ Secures Networks
Isolation of Public Services: Services that need to be accessible from the outside, such as web servers, mail servers, FTP servers, and DNS servers, are placed in the DMZ. These services are isolated from the internal network, which helps protect the internal systems from attacks that may exploit vulnerabilities in the public-facing services.
Controlled Access: Firewalls are used to create boundaries between the internet, the DMZ, and the internal network. The firewall rules are configured to allow only specific types of traffic to and from the DMZ. For example, incoming web traffic might be allowed to reach a web server in the DMZ, but not to access internal systems directly.
Minimal Exposure: Only the necessary services are exposed to the internet. This minimizes the attack surface, reducing the number of entry points that an attacker can exploit. Internal systems and data remain protected behind the additional layer of the firewall.
Layered Security: The DMZ provides an additional layer of defense (defense-in-depth). Even if an attacker manages to compromise a server in the DMZ, the internal network is still protected by another firewall, making it harder for the attacker to penetrate further.
Monitoring and Logging: Activities within the DMZ can be closely monitored and logged. Any suspicious behavior can be detected early, and appropriate actions can be taken to mitigate potential threats before they impact the internal network.
Traffic Filtering: The firewalls between the internet and the DMZ, as well as between the DMZ and the internal network, can filter traffic based on IP addresses, ports, and protocols. This filtering ensures that only legitimate traffic is allowed and that malicious traffic is blocked.
-Without a DMZ, if an attacker compromises a server on the network, they will be able to pivot to other systems on the network.
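As an illustration only, on a Linux box routing between the internet (eth0), the DMZ (eth1), and the internal LAN (eth2), the controlled-access policy described above might be sketched with iptables FORWARD rules like these (the interface names and ports are assumptions, not a complete ruleset):

```
# Internet may reach only the DMZ web server, on HTTP/HTTPS.
iptables -A FORWARD -i eth0 -o eth1 -p tcp --dport 80 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -p tcp --dport 443 -j ACCEPT
# DMZ hosts may never initiate connections into the internal LAN.
iptables -A FORWARD -i eth1 -o eth2 -j DROP
# Internet may never reach the internal LAN directly.
iptables -A FORWARD -i eth0 -o eth2 -j DROP
# Default deny for everything else crossing zones.
iptables -P FORWARD DROP
```

Even a compromised DMZ web server is then stuck behind the eth1-to-eth2 DROP rule, which is the layered-security point of the DMZ.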
What are the advantages and disadvantages of having the two DMZ firewalls be from
different vendors?
Using different firewall vendors for the two firewalls may be a good idea: it avoids the possibility of both having the same vulnerability, but it introduces more complexity and management overhead.
Be able to write pfSense firewall rules
Penetration Testing

What is penetration testing?
-A legal and authorized attempt to locate and exploit vulnerable systems for the purpose of making those systems more secure.
-Also known as: pen testing, PT, hacking, ethical hacking, whitehat hacking, offensive security, red teaming
What is the objective of the penetration testing?
Use tools and techniques used by the attackers in order to discover security vulnerabilities before the attackers do. 
What is the BAD pyramid?
The purpose of a red team is to find ways to improve the blue team, so purple teams should not be needed in an organization where the red/blue teams interaction is healthy and functioning properly. 
red: attack
purple: defender changes based off attack knowledge
blue: defend
green: builder changes based on defender knowledge
yellow: build
orange: builder changes based on attacker knowledge
Why are the penetration tests conducted?
-a company may want to have a stronger understanding of their security footprint
-system policy shortcomings
-network protocol weaknesses
-network/software misconfigurations
-software vulnerabilities
What is the difference between penetration testing and vulnerability assessment?
-Two terms often incorrectly used interchangeably in practice.
-Vulnerability assessment: review of systems and services to find potential vulnerabilities
-Penetration testing: finding and exploiting system vulnerabilities as a proof-of-concept
What is the difference between black-box, white-box, and grey-box testing.
Black-Box Testing
Tester Knowledge: The tester has no knowledge of the internal structure, code, or implementation details of the system.
-lack knowledge of system
White-Box Testing
Tester Knowledge: The tester has full knowledge of the internal structure, code, and implementation details of the system.
-very thorough , but not completely realistic
Grey-Box Testing
Tester Knowledge: The tester has partial knowledge of the internal structure, code, or implementation details of the system.
What is the difference between ethical and unethical hackers?
-Penetration testers, with proper authorization from the company, help improve the security of the company.
-Unethical hackers seek personal gain through extortion or other devious methods: profit, revenge, fame, etc. They have no authorization to conduct their attacks.
•Ethical vs unethical hacking: penetration testers obtain authorization from the organization whose systems they plan to attack; unethical hackers attack without authorization.
Know the stages of penetration testing and the importance of following a structured approach.

Planning and Reconnaissance:
Planning: Define the scope and goals of the test, including the systems to be tested and the testing methods.
Reconnaissance: Gather information about the target, such as IP addresses, domain names, and network infrastructure, to understand how to approach the test.
Scanning:
Purpose: Identify potential entry points and vulnerabilities in the target system.
Methods: Use tools to scan for open ports, services running on those ports, and known vulnerabilities.
Gaining Access:
Purpose: Attempt to exploit identified vulnerabilities to gain unauthorized access to the system.
Techniques: Use techniques like password cracking, SQL injection, or exploiting software vulnerabilities.
Maintaining Access:
Purpose: Ensure continued access to the compromised system to understand the potential impact of a prolonged attack.
Methods: Install backdoors or use other methods to maintain control over the system.
Analysis and Reporting:
Purpose: Document the findings, including vulnerabilities discovered, methods used, and the level of access achieved.
Report: Provide a detailed report to the organization, highlighting the risks and recommending steps to mitigate the vulnerabilities.
Remediation:
Purpose: Address and fix the identified vulnerabilities to improve the security of the system.
Action: Implement the recommended security measures from the report to protect against future attacks.
Retesting:
Purpose: Verify that the vulnerabilities have been successfully remediated.
Process: Conduct a follow-up test to ensure that the fixes are effective and no new issues have been introduced.
Importance of Following a Structured Approach
Consistency: A structured approach ensures that each stage is systematically followed, making the testing thorough and reliable.
Comprehensiveness: Following each stage helps identify and address all potential vulnerabilities, leaving no gaps in the security assessment.
Documentation: A structured method produces detailed documentation, which is crucial for understanding the security posture and for future reference.
Effectiveness: It ensures that the penetration test effectively mimics real-world attack scenarios, providing valuable insights into how an actual attacker might exploit vulnerabilities.
Risk Management: By identifying and addressing vulnerabilities, organizations can proactively manage security risks and protect their assets from potential attacks.
Example:
What is the difference between the passive and active reconnaissance?

Passive Reconnaissance
Definition: Gathering information about the target without directly interacting with the target system or network. The aim is to collect data without alerting the target.
Methods:
Publicly Available Information: Searching for information that is freely available on the internet, such as social media profiles, company websites, and news articles.
DNS Queries: Looking up domain registration information (WHOIS data), DNS records, and IP address ranges.
Network Traffic Analysis: Capturing and analyzing network traffic without sending packets to the target (e.g., using tools like Wireshark in a non-intrusive manner).
Search Engines: Using search engines to find information about the target, such as employee names, email addresses, and technical details.
Advantages:
Low Risk: Minimizes the chance of detection by the target because no direct interaction occurs.
Stealth: Suitable for the early stages of reconnaissance when the goal is to remain undetected.
Disadvantages:
Limited Information: May not provide as much detailed or specific information about vulnerabilities or configurations as active reconnaissance.
Active Reconnaissance
Definition: Actively engaging with the target system or network to gather information. This involves direct interaction, such as sending packets or probing the target.
Methods:
Network Scanning: Using tools like Nmap to scan for open ports, running services, and network topology.
Vulnerability Scanning: Running vulnerability scanners (e.g., Nessus, OpenVAS) to identify known weaknesses in the target systems.
Social Engineering: Directly interacting with individuals (e.g., phishing attacks) to gather information.
Probing and Enumerating: Sending specific queries or packets to the target to elicit responses that reveal information about the system (e.g., banner grabbing).
Advantages:
Detailed Information: Provides more detailed and specific information about the target's vulnerabilities, configurations, and active services.
Identification of Weaknesses: More effective in identifying exploitable vulnerabilities that can be used in subsequent attack phases.
Disadvantages:
Higher Risk: Increases the risk of detection by the target, which could alert them to the reconnaissance activity.
Potential Legal Issues: Unauthorized active reconnaissance can lead to legal repercussions if done without permission.
Summary
Passive Reconnaissance: Involves gathering information without direct interaction with the target, resulting in lower risk of detection but potentially less detailed information.
Active Reconnaissance: Involves direct interaction with the target to gather detailed information, but carries a higher risk of detection and potential legal consequences.
Both types of reconnaissance are essential in penetration testing to understand the target's environment and identify potential vulnerabilities while balancing the need for stealth and detailed information.
Be able to use the penetration testing tools discussed in class
nmap 192.168.1.1
nmap -sS -sV -O -A 192.168.1.1
-sS: Perform a stealth SYN scan.
-sV: Detect service versions.
-O: Detect operating system.
-A: Perform aggressive scan (includes OS detection, version detection, script scanning, and traceroute).
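A SYN scan (`-sS`) needs raw sockets and root; the simpler full-handshake variant (`nmap -sT`) can be sketched in a few lines of Python, which makes the mechanics clearer. This is a minimal illustration, not a replacement for nmap — the host and port list in the usage comment are examples.

```python
import socket

def connect_scan(host: str, ports, timeout: float = 0.5):
    """Report ports that complete a full TCP handshake (like `nmap -sT`)."""
    open_ports = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # closed, filtered, or unreachable
    return open_ports

# Usage (against a host you are authorized to scan):
#   connect_scan("192.168.1.1", [22, 80, 443])
```

Because every probe completes the handshake, this variant is noisier than `-sS` and is readily logged by the target.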
submitted by HarryPudding to u/HarryPudding


2024.05.16 13:33 samael6 Issues Searching for Movies on Radarr - API Error Help Needed

Hello everyone,
I'm facing a problem when trying to search for any movie on Radarr. Every time I perform a search, I get the following error message: "Search for '' failed. Invalid response received from RadarrAPI."

Problem Details

Steps to Reproduce

  1. Open Radarr.
  2. Go to the search field.
  3. Search for any movie (e.g., 'The Garfield Movie').

Expected Behavior

The searched movie should be found and listed.

Actual Behavior

An error occurs: "Search for '' failed. Invalid response received from RadarrAPI."

Logs

[Fatal] RadarrErrorPipeline: Request Failed. GET /MediaCoverProxy/7d0f1a0a0a793161319bfa3750b44a99d6c403cb92ce2e5f0dd11d0acc38d652/OHGtRqIim2cEHOYKPlbgNOV6Cb.jpg
[v5.6.0.8846] System.Net.Http.HttpRequestException: Resource temporarily unavailable (image.tmdb.org:443)
- System.Net.Sockets.SocketException (11): Resource temporarily unavailable
at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.ThrowException(SocketError error, CancellationToken cancellationToken)
at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.System.Threading.Tasks.Sources.IValueTaskSource.GetResult(Int16 token)
at System.Net.Sockets.Socket.g__WaitForConnectWithCancellation277_0(AwaitableSocketAsyncEventArgs saea, ValueTask connectTask, CancellationToken cancellationToken)
at NzbDrone.Common.Http.Dispatchers.ManagedHttpDispatcher.attemptConnection(AddressFamily addressFamily, SocketsHttpConnectionContext context, CancellationToken cancellationToken) in ./Radarr.Common/Http/Dispatchers/ManagedHttpDispatcher.cs:line 337
at NzbDrone.Common.Http.Dispatchers.ManagedHttpDispatcher.onConnect(SocketsHttpConnectionContext context, CancellationToken cancellationToken) in ./Radarr.Common/Http/Dispatchers/ManagedHttpDispatcher.cs:line 313
at System.Net.Http.HttpConnectionPool.ConnectToTcpHostAsync(String host, Int32 port, HttpRequestMessage initialRequest, Boolean async, CancellationToken cancellationToken)
--- End of inner exception stack trace ---
at System.Net.Http.HttpConnectionPool.ConnectToTcpHostAsync(String host, Int32 port, HttpRequestMessage initialRequest, Boolean async, CancellationToken cancellationToken)
at System.Net.Http.HttpConnectionPool.ConnectAsync(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken)
at System.Net.Http.HttpConnectionPool.AddHttp2ConnectionAsync(HttpRequestMessage request)
at System.Threading.Tasks.TaskCompletionSourceWithCancellation`1.WaitWithCancellationAsync(CancellationToken cancellationToken)
at System.Net.Http.HttpConnectionPool.GetHttp2ConnectionAsync(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken)
at System.Net.Http.HttpConnectionPool.SendWithVersionDetectionAndRetryAsync(HttpRequestMessage request, Boolean async, Boolean doRequestAuth, CancellationToken cancellationToken)
at System.Net.Http.AuthenticationHelper.SendWithAuthAsync(HttpRequestMessage request, Uri authUri, Boolean async, ICredentials credentials, Boolean preAuthenticate, Boolean isProxyAuth, Boolean doRequestAuth, HttpConnectionPool pool, CancellationToken cancellationToken)
at System.Net.Http.DiagnosticsHandler.SendAsyncCore(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken)
at System.Net.Http.DecompressionHandler.SendAsync(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken)
at System.Net.Http.HttpClient.g__Core83_0(HttpRequestMessage request, HttpCompletionOption completionOption, CancellationTokenSource cts, Boolean disposeCts, CancellationTokenSource pendingRequestsCts, CancellationToken originalCancellationToken)
at NzbDrone.Common.Http.Dispatchers.ManagedHttpDispatcher.GetResponseAsync(HttpRequest request, CookieContainer cookies) in ./Radarr.Common/Http/Dispatchers/ManagedHttpDispatcher.cs:line 115
at NzbDrone.Common.Http.HttpClient.ExecuteRequestAsync(HttpRequest request, CookieContainer cookieContainer) in ./Radarr.Common/Http/HttpClient.cs:line 157
at NzbDrone.Common.Http.HttpClient.ExecuteAsync(HttpRequest request) in ./Radarr.Common/Http/HttpClient.cs:line 70
at NzbDrone.Core.MediaCover.MediaCoverProxy.GetImage(String hash) in ./Radarr.Core/MediaCover/MediaCoverProxy.cs:line 67
at Radarr.Http.Frontend.Mappers.MediaCoverProxyMapper.GetResponse(String resourceUrl) in ./Radarr.Http/Frontend/Mappers/MediaCoverProxyMapper.cs:line 46
at Radarr.Http.Frontend.StaticResourceController.MapResource(String path) in ./Radarr.Http/Frontend/StaticResourceController.cs:line 75
at Radarr.Http.Frontend.StaticResourceController.Index(String path) in ./Radarr.Http/Frontend/StaticResourceController.cs:line 47
at Microsoft.AspNetCore.Mvc.Infrastructure.ActionMethodExecutor.TaskOfIActionResultExecutor.Execute(IActionResultTypeMapper mapper, ObjectMethodExecutor executor, Object controller, Object[] arguments)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.g__Awaited12_0(ControllerActionInvoker invoker, ValueTask`1 actionResultValueTask)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.g__Awaited10_0(ControllerActionInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Rethrow(ActionExecutedContextSealed context)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.g__Awaited13_0(ControllerActionInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.g__Awaited20_0(ResourceInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.g__Awaited17_0(ResourceInvoker invoker, Task task, IDisposable scope)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.g__Awaited17_0(ResourceInvoker invoker, Task task, IDisposable scope)
at Microsoft.AspNetCore.Routing.EndpointMiddleware.g__AwaitRequestTask6_0(Endpoint endpoint, Task requestTask, ILogger logger)
at Radarr.Http.Middleware.BufferingMiddleware.InvokeAsync(HttpContext context) in ./Radarr.Http/Middleware/BufferingMiddleware.cs:line 28
at Radarr.Http.Middleware.IfModifiedMiddleware.InvokeAsync(HttpContext context) in ./Radarr.Http/Middleware/IfModifiedMiddleware.cs:line 41
at Radarr.Http.Middleware.CacheHeaderMiddleware.InvokeAsync(HttpContext context) in ./Radarr.Http/Middleware/CacheHeaderMiddleware.cs:line 33
at Radarr.Http.Middleware.StartingUpMiddleware.InvokeAsync(HttpContext context) in ./Radarr.Http/Middleware/StartingUpMiddleware.cs:line 38
at Radarr.Http.Middleware.UrlBaseMiddleware.InvokeAsync(HttpContext context) in ./Radarr.Http/Middleware/UrlBaseMiddleware.cs:line 27
at Radarr.Http.Middleware.VersionMiddleware.InvokeAsync(HttpContext context) in ./Radarr.Http/Middleware/VersionMiddleware.cs:line 29
at Microsoft.AspNetCore.ResponseCompression.ResponseCompressionMiddleware.InvokeCore(HttpContext context)
at Microsoft.AspNetCore.Authorization.Policy.AuthorizationMiddlewareResultHandler.HandleAsync(RequestDelegate next, HttpContext context, AuthorizationPolicy policy, PolicyAuthorizationResult authorizeResult)
at Microsoft.AspNetCore.Authorization.AuthorizationMiddleware.Invoke(HttpContext context)
at Microsoft.AspNetCore.Authentication.AuthenticationMiddleware.Invoke(HttpContext context)
at Microsoft.AspNetCore.Diagnostics.ExceptionHandlerMiddleware.g__Awaited6_0(ExceptionHandlerMiddleware middleware, HttpContext context, Task task)
Additional Information
I have checked the Pi-hole DNS resolver, and all requests are fine. I also changed the DNS on the host machine to 1.1.1.1 and 8.8.8.8, but the problem persists. Accessing the API directly via the browser (https://image.tmdb.org/) returns a "Bad request" error, indicating that the request might be incorrect or that the connection to the HTTPS server is being closed.
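"Resource temporarily unavailable" (EAGAIN) raised at socket-connect time usually points at the container's outbound networking (DNS, routing, IPv6) rather than Radarr itself. A generic probe like the one below — run from inside the same container or host, with the hostname taken from the log — can separate DNS, TCP, and TLS failures; this is a diagnostic sketch, not a Radarr tool.

```python
import socket
import ssl

def check_tls(host: str, port: int = 443, timeout: float = 5.0) -> str:
    """Try DNS resolution, TCP connect, then a TLS handshake; report the first failure."""
    try:
        socket.getaddrinfo(host, port)
    except OSError as e:
        return f"DNS failed: {e}"
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ssl.create_default_context().wrap_socket(sock, server_hostname=host):
                return "ok"
    except OSError as e:
        return f"connect/TLS failed: {e}"

# Run inside the Radarr container, e.g.:
#   check_tls("image.tmdb.org")
# "ok" means DNS, routing, and the TLS handshake all work from that network namespace.
```

If this fails inside the container but works on the host, the problem is the container's network config, not TMDB or Radarr.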
Any help or suggestions would be greatly appreciated!
EDIT:
I already opened two issues: https://github.com/Radarr/Radarr/issues/10030 and https://github.com/linuxserver/docker-radarr/issues/229
submitted by samael6 to radarr


2024.05.16 13:28 icertglobal1 Tips for protecting your business with Microsoft’s cybersecurity tools

In today's digital age, protecting your business from cyber threats is more important than ever. With the increasing number of cyber attacks targeting businesses of all sizes, it is crucial to implement robust cybersecurity measures to safeguard your sensitive data and systems. Fortunately, Microsoft offers a range of cybersecurity tools that can help you enhance your business's security posture and defend against potential threats. In this article, we will discuss some essential tips for protecting your business with Microsoft's cybersecurity tools.
Microsoft's Cybersecurity Tools
Microsoft is a leading provider of cybersecurity solutions, offering a comprehensive suite of tools designed to help businesses protect their data and systems from cyber threats. From advanced threat detection to secure cloud storage, Microsoft's cybersecurity tools can help you defend against a wide range of cyber attacks and keep your business safe and secure.
Key Tips for Protecting Your Business with Microsoft's Cybersecurity Tools
1. Implement Multi-Factor Authentication: One of the most effective ways to enhance your business's security is by implementing multi-factor authentication for all user accounts. This extra layer of security can help prevent unauthorized access to your systems and data, even if passwords are compromised.
2. Regularly Update Your Software: Keeping your software up to date is essential for protecting your business from security vulnerabilities. Microsoft regularly releases patches and updates to address known security issues, so make sure to install these updates promptly to stay protected.
3. Utilize Microsoft's Advanced Threat Protection: Microsoft offers advanced threat protection solutions that can help you detect and respond to sophisticated cyber threats. By leveraging these tools, you can strengthen your defenses against malware, phishing attacks, and other malicious activities.
4. Secure Your Cloud Environment: If your business uses cloud services, such as Microsoft Azure or Office 365, it is crucial to secure your cloud environment to prevent unauthorized access and data breaches. Microsoft offers robust security tools for cloud environments that can help you protect your sensitive data.
5. Train Your Employees: Human error is one of the leading causes of security breaches, so it is essential to educate your employees about cybersecurity best practices. Microsoft offers cybersecurity training programs that can help your staff identify and mitigate potential threats.
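The first tip, multi-factor authentication, commonly means a TOTP code from an authenticator app. The verification math (RFC 6238 on top of RFC 4226) fits in a few lines of standard-library code — this is a generic sketch of the algorithm, not Microsoft's implementation, and the secret shown is the RFC test-vector key.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, digits: int = 6, step: int = 30) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, dynamically truncated."""
    counter = int(for_time if for_time is not None else time.time()) // step
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation offset from the last nibble
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# RFC 6238 Appendix B test vector (SHA-1, 8 digits): T=59 -> 94287082
print(totp(b"12345678901234567890", for_time=59, digits=8))  # 94287082
```

The point of the sketch: the server and the phone share only the secret and the clock, so no code ever travels over the network ahead of time — which is what makes TOTP resistant to password theft.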
By following these tips and leveraging Microsoft's cybersecurity tools, you can significantly enhance your business's security posture and protect your valuable data and systems from cyber threats. Remember, investing in cybersecurity is not only a smart business decision but also a critical step in safeguarding your business's future. Stay proactive, stay secure, and stay protected with Microsoft's cybersecurity solutions.
How to obtain Microsoft Certification?
We are an Education Technology company providing certification training courses to accelerate careers of working professionals worldwide. We impart training through instructor-led classroom workshops, instructor-led live virtual training sessions, and self-paced e-learning courses.
We have successfully conducted training sessions in 108 countries across the globe and enabled thousands of working professionals to enhance the scope of their careers.
Our enterprise training portfolio includes in-demand and globally recognized certification training courses in Project Management, Quality Management, Business Analysis, IT Service Management, Agile and Scrum, Cyber Security, Data Science, and Emerging Technologies. Download our Enterprise Training Catalog from https://www.icertglobal.com/corporate-training-for-enterprises.php
Popular Courses include:
The 10 top-paying certifications to target in 2024 are:
· Certified Information Systems Security Professional® (CISSP)
· AWS Certified Solutions Architect
· Google Certified Professional Cloud Architect
· Big Data Certification
· Data Science Certification
· Certified In Risk And Information Systems Control (CRISC)
· Certified Information Security Manager(CISM)
· Project Management Professional (PMP)® Certification
· Certified Ethical Hacker (CEH)
· Certified Scrum Master (CSM)
Conclusion
In conclusion, protecting your business with Microsoft's cybersecurity tools is a vital step in safeguarding your data and systems from cyber threats. By implementing robust security measures, staying informed about the latest cyber threats, and leveraging Microsoft's advanced tools, you can significantly enhance your business's security posture and defend against potential attacks. Don't wait until it's too late – start protecting your business today with Microsoft's cybersecurity solutions.
For more information click here
Subscribe to our YouTube channel: iCert Global
For Discounts use Coupon Code : INSTANT10
Our Website : www.icertglobal.com
Email: sam@icertglobal.org
Contact us : US: +1(713)-287-1214 IND: +91 988-620-5050
submitted by icertglobal1 to u/icertglobal1


2024.05.16 09:47 segdy How can I enable Basic Auth?

I have installed SharePoint 2013 Foundation single instance and I can log in via local username "Administrator".
However, this uses NTLM. Since I would like SharePoint to sit behind an Apache reverse proxy in the future, I can't use NTLM and want to switch to HTTP Basic Auth (*).
In IIS, under the Site "Sharepoint - 80", "Authentication", I removed NTLM as a provider from "Windows Authentication". I also enabled "Basic Authentication" and restarted IIS.
While I can still log in with "Administrator", I am now getting "Sorry, this site hasn't been shared with you."
I am sure there is something else I need to do. But what?
(*) First off, this is a playground setup, so not (yet) concerned about security. I also know that credentials for HTTP Basic Auth are in clear text. This is definitely no issue because the reverse proxy only accepts HTTPS and the connection to SharePoint is secure and private (virtual network on same host).
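For reference, HTTP Basic Auth is nothing more than a base64-encoded `user:password` pair in the `Authorization` header — which is exactly why it is only safe behind the HTTPS reverse proxy described above. A quick sketch (the password here is a placeholder):

```python
import base64

def basic_auth_header(user: str, password: str) -> str:
    """Build the Authorization header value a client sends for Basic auth."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

print(basic_auth_header("Administrator", "s3cret"))
# Basic QWRtaW5pc3RyYXRvcjpzM2NyZXQ=
```

Because base64 is trivially reversible, anyone who can read the header can read the credentials — TLS between proxy and client (and ideally proxy and SharePoint) is mandatory once this leaves a playground.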
submitted by segdy to sharepoint


2024.05.16 08:05 notscottsmith Apache Guacamole and SSH Clients

Hi all,
I have an environment for my employer that we're really happy with - Guac is setup with a reverse proxy in front to handle SSO/MFA and pushes the user into the right VM with certificate authentication all via the browser with shell access.
What I'm wondering is if there is a terminal client solution (like a Putty or MobaXterm) around that allows for adding hosts in and connecting over https (like guac) but in a client terminal environment?
Currently different hosts exist in my browser like regular bookmarks but if I could have that in a terminal client instead, it would make my life just a little bit easier (and happier).
Or, alternatively, are there any environment setups which would allow for such a thing while still maintaining the MFA/SSO process (we use Azure so it would have to involve a browser to do it, I'm guessing)?
Cheers in advance for the knowledge.
submitted by notscottsmith to sysadmin


2024.05.14 12:21 rweninger Nextcloud Upgrade fron chart version 1.6.61 to 2.0.5 failed

I am not sure if I want to solve this issue actually, I just want to vent.
iX, what do you think yourself when you print out this error message to a "customer"?
I mean, your installation of Kubernetes on a single host is crap, and using Helm charts that break this utterly in an atomic chain reaction doesn't make it trustworthy. I am on the way to migrating Nextcloud away from TrueNAS to a Docker host and will just use TrueNAS as storage.
I don't care about the sensitive data below; at the time of posting, this system isn't running anymore. Sorry if I annoy somebody.
[EFAULT] Failed to upgrade App: WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /etc/ranchek3s/k3s.yaml Error: UPGRADE FAILED: execution error at (nextcloud/templates/common.yaml:38:4): Chart - Values contain an error that may be a result of merging. Values containing the error: Error: 'error converting YAML to JSON: yaml: invalid leading UTF-8 octet' TZ: UTC bashImage: pullPolicy: IfNotPresent repository: bash tag: 4.4.23 configmap: nextcloud-config: data: limitrequestbody.conf: LimitRequestBody 3221225472 occ: - #!/bin/bash uid="$(id -u)" gid="$(id -g)" if [ "$uid" = '0' ]; then user='www-data' group='www-data' else user="$uid" group="$gid" fi run_as() { if [ "$(id -u)" = 0 ]; then su -p "$user" -s /bin/bash -c 'php /vawww/html/occ "$@"' - "$@" else /bin/bash -c 'php /vawww/html/occ "$@"' - "$@" fi } run_as "$@" opcache.ini: opcache.memory_consumption=128 php.ini: max_execution_time=30 enabled: true nginx: data: nginx.conf: - events {} http { server { listen 9002 ssl http2; listen [::]:9002 ssl http2; # Redirect HTTP to HTTPS error_page 497 301 =307 https://$host$request_uri; ssl_certificate '/etc/nginx-certs/public.crt'; ssl_certificate_key '/etc/nginx-certs/private.key'; client_max_body_size 3G; add_header Strict-Transport-Security "max-age=15552000; includeSubDomains; preload" always; location = /robots.txt { allow all; log_not_found off; access_log off; } location = /.well-known/carddav { return 301 $scheme://$host/remote.php/dav; } location = /.well-known/caldav { return 301 $scheme://$host/remote.php/dav; } location / { proxy_pass http://nextcloud:80; proxy_http_version 1.1; proxy_cache_bypass $http_upgrade; proxy_request_buffering off; # Proxy headers proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto https; 
proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Port 443; # Proxy timeouts proxy_connect_timeout 60s; proxy_send_timeout 60s; proxy_read_timeout 60s; } } } enabled: true fallbackDefaults: accessModes: - ReadWriteOnce persistenceType: emptyDir probeTimeouts: liveness: failureThreshold: 5 initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 5 readiness: failureThreshold: 5 initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 2 timeoutSeconds: 5 startup: failureThreshold: 60 initialDelaySeconds: 10 periodSeconds: 5 successThreshold: 1 timeoutSeconds: 2 probeType: http pvcRetain: false pvcSize: 1Gi serviceProtocol: tcp serviceType: ClusterIP storageClass: "" global: annotations: {} ixChartContext: addNvidiaRuntimeClass: false hasNFSCSI: true hasSMBCSI: true isInstall: false isStopped: false isUpdate: false isUpgrade: true kubernetes_config: cluster_cidr: 172.16.0.0/16 cluster_dns_ip: 172.17.0.10 service_cidr: 172.17.0.0/16 nfsProvisioner: nfs.csi.k8s.io nvidiaRuntimeClassName: nvidia operation: UPGRADE smbProvisioner: smb.csi.k8s.io storageClassName: ix-storage-class-nextcloud upgradeMetadata: newChartVersion: 2.0.5 oldChartVersion: 1.6.61 preUpgradeRevision: 89 labels: {} minNodePort: 9000 image: pullPolicy: IfNotPresent repository: nextcloud tag: 29.0.0 imagePullSecret: [] ixCertificateAuthorities: {} ixCertificates: "1": CA_type_existing: false CA_type_intermediate: false CA_type_internal: false CSR: null DN: /C=US/O=iXsystems/CN=localhost/emailAddress=info@ixsystems.com/ST=Tennessee/L=Maryville/subjectAltName=DNS:localhost can_be_revoked: false cert_type: CERTIFICATE cert_type_CSR: false cert_type_existing: true cert_type_internal: false certificate: -----BEGIN CERTIFICATE----- MIIDrTCCApWgAwIBAgIEHHHd+zANBgkqhkiG9w0BAQsFADCBgDELMAkGA1UEBhMC VVMxEjAQBgNVBAoMCWlYc3lzdGVtczESMBAGA1UEAwwJbG9jYWxob3N0MSEwHwYJ KoZIhvcNAQkBFhJpbmZvQGl4c3lzdGVtcy5jb20xEjAQBgNVBAgMCVRlbm5lc3Nl 
ZTESMBAGA1UEBwwJTWFyeXZpbGxlMB4XDTIzMTIxNjA3MDUwOVoXDTI1MDExNjA3 MDUwOVowgYAxCzAJBgNVBAYTAlVTMRIwEAYDVQQKDAlpWHN5c3RlbXMxEjAQBgNV BAMMCWxvY2FsaG9zdDEhMB8GCSqGSIb3DQEJARYSaW5mb0BpeHN5c3RlbXMuY29t MRIwEAYDVQQIDAlUZW5uZXNzZWUxEjAQBgNVBAcMCU1hcnl2aWxsZTCCASIwDQYJ KoZIhvcNAQEBBQADggEPADCCAQoCggEBAKPRN3n5ngKFrHQ12gKCmLEN85If6B3E KEo4nvTkTIWLzXZcTGxlJ9kGr9bt0V8cvEInZnOCnyY74lzKlMhZv1R58nfBmz5a gpV6scHXZVghGhGsjtP7/H4PRMUbzM9MawET8+Au8grjAodUkz6Jskcwhgg9EVS5 UQPTDkxXJYFRUN1XhJOR4tqsrHFrI25oUF6Gms9Wp1aq0mJXh+FIGAyELqpdk/Q8 N1Rjn3t4m2Ub+OPmBLwHOncIqz2PHVgL574bT/q+Lc3Mi/gQsfNi6VN7UkNTQ5Q2 uOhrcw4gtjn41v0j7k9CsUvPK8zfCizQHgBx6Ih33Z850pHUQyNuwjECAwEAAaMt MCswFAYDVR0RBA0wC4IJbG9jYWxob3N0MBMGA1UdJQQMMAoGCCsGAQUFBwMBMA0G CSqGSIb3DQEBCwUAA4IBAQAQG2KsF6ki8dooaaM+32APHJp38LEmLNIMdnIlCHPw RnQ+4I8ssEPKk3czIzOlOe6R3V71GWg1JlGEuUD6M3rPbzSfWzv0kdji/qgzUId1 oh9vEao+ndPijYpDi6CUcBADuzilcygSBl05j6RlS2Uv8+tNIjxTKrDegyaEtC3W RoVqON0vhDSKJ3OsOKR2g5uFfs/uHxBvskkChdGn/1aRz+DdHCYVOEavnQylXPBk xzWQDVt6+6mAhejGGkkGsIG1QY7pFpQPA9UWeY/C/3/QdSl01GgfpyWNsfE+Wu1b IS3wxfWfuiMiDbUElqjDqiy623peeVFXrWlTV4G4yBG/ -----END CERTIFICATE----- certificate_path: /etc/certificates/truenas_default.crt chain: false chain_list: - -----BEGIN CERTIFICATE----- MIIDrTCCApWgAwIBAgIEHHHd+zANBgkqhkiG9w0BAQsFADCBgDELMAkGA1UEBhMC VVMxEjAQBgNVBAoMCWlYc3lzdGVtczESMBAGA1UEAwwJbG9jYWxob3N0MSEwHwYJ KoZIhvcNAQkBFhJpbmZvQGl4c3lzdGVtcy5jb20xEjAQBgNVBAgMCVRlbm5lc3Nl ZTESMBAGA1UEBwwJTWFyeXZpbGxlMB4XDTIzMTIxNjA3MDUwOVoXDTI1MDExNjA3 MDUwOVowgYAxCzAJBgNVBAYTAlVTMRIwEAYDVQQKDAlpWHN5c3RlbXMxEjAQBgNV BAMMCWxvY2FsaG9zdDEhMB8GCSqGSIb3DQEJARYSaW5mb0BpeHN5c3RlbXMuY29t MRIwEAYDVQQIDAlUZW5uZXNzZWUxEjAQBgNVBAcMCU1hcnl2aWxsZTCCASIwDQYJ KoZIhvcNAQEBBQADggEPADCCAQoCggEBAKPRN3n5ngKFrHQ12gKCmLEN85If6B3E KEo4nvTkTIWLzXZcTGxlJ9kGr9bt0V8cvEInZnOCnyY74lzKlMhZv1R58nfBmz5a gpV6scHXZVghGhGsjtP7/H4PRMUbzM9MawET8+Au8grjAodUkz6Jskcwhgg9EVS5 UQPTDkxXJYFRUN1XhJOR4tqsrHFrI25oUF6Gms9Wp1aq0mJXh+FIGAyELqpdk/Q8 
N1Rjn3t4m2Ub+OPmBLwHOncIqz2PHVgL574bT/q+Lc3Mi/gQsfNi6VN7UkNTQ5Q2 uOhrcw4gtjn41v0j7k9CsUvPK8zfCizQHgBx6Ih33Z850pHUQyNuwjECAwEAAaMt MCswFAYDVR0RBA0wC4IJbG9jYWxob3N0MBMGA1UdJQQMMAoGCCsGAQUFBwMBMA0G CSqGSIb3DQEBCwUAA4IBAQAQG2KsF6ki8dooaaM+32APHJp38LEmLNIMdnIlCHPw RnQ+4I8ssEPKk3czIzOlOe6R3V71GWg1JlGEuUD6M3rPbzSfWzv0kdji/qgzUId1 oh9vEao+ndPijYpDi6CUcBADuzilcygSBl05j6RlS2Uv8+tNIjxTKrDegyaEtC3W RoVqON0vhDSKJ3OsOKR2g5uFfs/uHxBvskkChdGn/1aRz+DdHCYVOEavnQylXPBk xzWQDVt6+6mAhejGGkkGsIG1QY7pFpQPA9UWeY/C/3/QdSl01GgfpyWNsfE+Wu1b IS3wxfWfuiMiDbUElqjDqiy623peeVFXrWlTV4G4yBG/ -----END CERTIFICATE----- city: Maryville common: localhost country: US csr_path: /etc/certificates/truenas_default.csr digest_algorithm: SHA256 email: info@ixsystems.com expired: false extensions: ExtendedKeyUsage: TLS Web Server Authentication SubjectAltName: DNS:localhost fingerprint: 8E:68:9D:0A:7D:A6:41:11:59:B0:0C:01:8C:AC:C4:F4:DB:F9:6B:2C from: Sat Dec 16 08:05:09 2023 id: 1 internal: "NO" issuer: external key_length: 2048 key_type: RSA lifetime: 397 name: truenas_default organization: iXsystems organizational_unit: null parsed: true privatekey: -----BEGIN PRIVATE KEY----- MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCj0Td5+Z4Chax0 NdoCgpixDfOSH+gdxChKOJ705EyFi812XExsZSfZBq/W7dFfHLxCJ2Zzgp8mO+Jc ypTIWb9UefJ3wZs+WoKVerHB12VYIRoRrI7T+/x+D0TFG8zPTGsBE/PgLvIK4wKH VJM+ibJHMIYIPRFUuVED0w5MVyWBUVDdV4STkeLarKxxayNuaFBehprPVqdWqtJi V4fhSBgMhC6qXZP0PDdUY597eJtlG/jj5gS8Bzp3CKs9jx1YC+e+G0/6vi3NzIv4 ELHzYulTe1JDU0OUNrjoa3MOILY5+Nb9I+5PQrFLzyvM3wos0B4AceiId92fOdKR 1EMjbsIxAgMBAAECggEAS/Su51RxCjRWwM9TVUSebcHNRNyccGjKUZetRFkyjd1D l/S1zrCcaElscJh2MsaNF5NTMo3HIyAzFdksYTUTvKSKYzKWu7OVxp9MGle3+sPm ZXmABBRbf0uvFEGOljOVjbtloXXC7n9RZdQ2LZIE4nNCQkGmboU6Zi6O+6CQmEOQ 9iyYJ8NyXtjDT2sVOpysAj3ga6tdtSosG7SQuo41t20mw6hbl08LhQP9LfZJyKCR 0x1cYny+XHifB6JQAt8crzHYpKaJc2tZd4dXJ1xDnm2Aa/Au5uEA01P/L3hf41sI cUmBhVf1z5m9yBsyaZnW6LzaR5tQwpnPWPEcNfuwLQKBgQDM1o8vwKCo435shpGE zCdqbvK4+J0XYmbgEwHId8xr9rzZ852lAhs6VO2WQQVMGUoWRaH44B3z1Jv9N5Qa 
4RUwnTb1MERfzEjRwUuIWjtz34yAXko0iU3M0FYpIxDuKVJNOEO1Doey0lTUcIYQ sfRUVxxJZ3hpDo7RhPSZpwyBtwKBgQDMu8PFVQ5XRb90qaGqg+ACaXMfHXfuWzuJ UqgyNrvF6wqd9Z0Nn299m7EonE6qJftUqlqHC62OCBfqRBNkwOw40s7ORZvqUCkP 7WsWuJu4HqhS2we8yKRuqj520VP537ZeqnK64mDxDKBvL9ttCujbxy01WFWcdwkO sSAViAK7VwKBgQCAeNG1kYsyYfyY9I2wTJssFgoGGWftkroTL9iecwSzcj1gNXta Usfg/gNFieJYqEPfVC0Sev5OP7rWRlWNxj4UD4a4oV1A+E9zv1gwXOeM9ViZ6omA Cd3R55kik+u6dBA6fl9433Qco+6wjyKGthYYD8qd/1d2DLtmjY0cEbm2YQKBgH4/ Zuifm5lLhFVPaUa5zYAPQJM2W8da8OqsUtWsFLxmRQTE+ZT19Q1S3br6MDQR+drq tapDFEHaUcz/L6pYoRIlRKvEFvI1fiy5Lekz66ptFUUKlcnfPC6VwrEIQi16u33C w77ka/0Y2THXJAsoyBEG0KTtlNVIPgiWRv+gAHc/AoGATOlO6ZVhf0vWPIKBhajM ijWTNIX/iCNOheJEjLEPksG4LVpU16OphZL2m0nIyOryQ0Fmt7GHUfl3CXFhTH/P G47PzH+mLCQLp5TUIeNRQWScWNGGsf9J+MtwpxHMzUymDJySR4aot0bH3fge0MO1 QccFxNbLODRmJuYbSQB1HZQ= -----END PRIVATE KEY----- privatekey_path: /etc/certificates/truenas_default.key revoked: false revoked_date: null root_path: /etc/certificates san: - DNS:localhost serial: 477224443 signedby: null state: Tennessee subject_name_hash: 3193428416 type: 8 until: Thu Jan 16 08:05:09 2025 ixChartContext: addNvidiaRuntimeClass: false hasNFSCSI: true hasSMBCSI: true isInstall: false isStopped: false isUpdate: false isUpgrade: true kubernetes_config: cluster_cidr: 172.16.0.0/16 cluster_dns_ip: 172.17.0.10 service_cidr: 172.17.0.0/16 nfsProvisioner: nfs.csi.k8s.io nvidiaRuntimeClassName: nvidia operation: UPGRADE smbProvisioner: smb.csi.k8s.io storageClassName: ix-storage-class-nextcloud upgradeMetadata: newChartVersion: 2.0.5 oldChartVersion: 1.6.61 preUpgradeRevision: 89 ixExternalInterfacesConfiguration: [] ixExternalInterfacesConfigurationNames: [] ixVolumes: - hostPath: /mnt/Camelot/ix-applications/releases/nextcloud/volumes/ix_volumes/ix-postgres_backups mariadbImage: pullPolicy: IfNotPresent repository: mariadb tag: 10.6.14 ncConfig: additionalEnvs: [] adminPassword: d3k@M%YRBRcj adminUser: admin commands: [] cron: enabled: false schedule: '*/15 * * * *' dataDir: 
/vawww/html/data host: charon.weninger.local maxExecutionTime: 30 maxUploadLimit: 3 opCacheMemoryConsumption: 128 phpMemoryLimit: 512 ncDbHost: nextcloud-postgres ncDbName: nextcloud ncDbPass: XvgIoT84hMmNDlH ncDbUser: ��-��� ncNetwork: certificateID: 1 nginx: externalAccessPort: 443 proxyTimeouts: 60 useDifferentAccessPort: false webPort: 9002 ncPostgresImage: pullPolicy: IfNotPresent repository: postgres tag: "13.1" ncStorage: additionalStorages: [] data: hostPathConfig: aclEnable: false hostPath: /mnt/Camelot/Applications/Nextcloud/ncdata ixVolumeConfig: datasetName: data type: hostPath html: hostPathConfig: aclEnable: false hostPath: /mnt/Camelot/Applications/Nextcloud/ncdata ixVolumeConfig: datasetName: html type: hostPath isDataInTheSameVolume: true migrationFixed: true pgBackup: ixVolumeConfig: aclEnable: false datasetName: ix-postgres_backups type: ixVolume pgData: hostPathConfig: aclEnable: false hostPath: /mnt/Camelot/Applications/Nextcloud/pgdata ixVolumeConfig: datasetName: pgData type: hostPath nginxImage: pullPolicy: IfNotPresent repository: nginx tag: 1.25.4 notes: custom: ## Database You can connect to the database using the pgAdmin App from the catalog
Database Details
- Database: \{{ .Values.ncDbName }}` - Username: `{{ .Values.ncDbUser }}` - Password: `{{ .Values.ncDbPass }}` - Host: `{{ .Values.ncDbHost }}.{{ .Release.Namespace }}.svc.cluster.local` - Port: `5432``
{{- $_ := unset .Values "ncDbUser" }} {{- $_ := unset .Values "ncDbName" }} {{- $_ := unset .Values "ncDbPass" }} {{- $_ := unset .Values "ncDbHost" }} Note: Nextcloud will create an additional new user and password for the admin user on first startup. You can find those credentials in the \/vawww/html/config/config.php` file inside the container. footer: # Documentation Documentation for this app can be found at https://www.truenas.com/docs. # Bug reports If you find a bug in this app, please file an issue at https://ixsystems.atlassian.net header: # Welcome to TrueNAS SCALE Thank you for installing {{ .Chart.Annotations.title }} App. persistence: config: datasetName: null domain: null enabled: true hostPath: /mnt/Camelot/Applications/Nextcloud/ncdata medium: null password: null readOnly: false server: null share: null size: null targetSelector: nextcloud: nextcloud: mountPath: /vawww/html/config subPath: config nextcloud-cron: nextcloud-cron: mountPath: /vawww/html/config subPath: config type: hostPath username: null customapps: datasetName: null domain: null enabled: true hostPath: /mnt/Camelot/Applications/Nextcloud/ncdata medium: null password: null readOnly: false server: null share: null size: null targetSelector: nextcloud: nextcloud: mountPath: /vawww/html/customapps subPath: custom_apps nextcloud-cron: nextcloud-cron: mountPath: /vawww/html/custom_apps subPath: custom_apps type: hostPath username: null data: datasetName: null domain: null enabled: true hostPath: /mnt/Camelot/Applications/Nextcloud/ncdata medium: null password: null readOnly: false server: null share: null size: null targetSelector: nextcloud: nextcloud: mountPath: /vawww/html/data subPath: data nextcloud-cron: nextcloud-cron: mountPath: /vawww/html/data subPath: data type: hostPath username: null html: datasetName: null domain: null enabled: true hostPath: /mnt/Camelot/Applications/Nextcloud/ncdata medium: null password: null readOnly: false server: null share: null size: null 
targetSelector: nextcloud: nextcloud: mountPath: /vawww/html subPath: html nextcloud-cron: nextcloud-cron: mountPath: /vawww/html subPath: html postgresbackup: postgresbackup: mountPath: /nc-config type: hostPath username: null nc-config-limreqbody: defaultMode: "0755" enabled: true objectName: nextcloud-config targetSelector: nextcloud: nextcloud: mountPath: /etc/apache2/conf-enabled/limitrequestbody.conf subPath: limitrequestbody.conf type: configmap nc-config-opcache: defaultMode: "0755" enabled: true objectName: nextcloud-config targetSelector: nextcloud: nextcloud: mountPath: /uslocal/etc/php/conf.d/opcache-z-99.ini subPath: opcache.ini type: configmap nc-config-php: defaultMode: "0755" enabled: true objectName: nextcloud-config targetSelector: nextcloud: nextcloud: mountPath: /uslocal/etc/php/conf.d/nextcloud-z-99.ini subPath: php.ini type: configmap nc-occ: defaultMode: "0755" enabled: true objectName: nextcloud-config targetSelector: nextcloud: nextcloud: mountPath: /usbin/occ subPath: occ type: configmap nginx-cert: defaultMode: "0600" enabled: true items: - key: tls.key path: private.key - key: tls.crt path: public.crt objectName: nextcloud-cert targetSelector: nginx: nginx: mountPath: /etc/nginx-certs readOnly: true type: secret nginx-conf: defaultMode: "0600" enabled: true items: - key: nginx.conf path: nginx.conf objectName: nginx targetSelector: nginx: nginx: mountPath: /etc/nginx readOnly: true type: configmap postgresbackup: datasetName: ix-postgres_backups domain: null enabled: true hostPath: null medium: null password: null readOnly: false server: null share: null size: null targetSelector: postgresbackup: permissions: mountPath: /mnt/directories/postgres_backup postgresbackup: mountPath: /postgres_backup type: ixVolume username: null postgresdata: datasetName: null domain: null enabled: true hostPath: /mnt/Camelot/Applications/Nextcloud/pgdata medium: null password: null readOnly: false server: null share: null size: null targetSelector: 
postgres: permissions: mountPath: /mnt/directories/postgres_data postgres: mountPath: /var/lib/postgresql/data type: hostPath username: null themes: datasetName: null domain: null enabled: true hostPath: /mnt/Camelot/Applications/Nextcloud/ncdata medium: null password: null readOnly: false server: null share: null size: null targetSelector: nextcloud: nextcloud: mountPath: /var/www/html/themes subPath: themes nextcloud-cron: nextcloud-cron: mountPath: /var/www/html/themes subPath: themes type: hostPath username: null tmp: enabled: true targetSelector: nextcloud: nextcloud: mountPath: /tmp type: emptyDir podOptions: automountServiceAccountToken: false dnsConfig: options: [] dnsPolicy: ClusterFirst enableServiceLinks: false hostAliases: [] hostNetwork: false restartPolicy: Always runtimeClassName: "" terminationGracePeriodSeconds: 30 tolerations: [] portal: {} postgresImage: pullPolicy: IfNotPresent repository: postgres tag: "15.2" rbac: {} redisImage: pullPolicy: IfNotPresent repository: bitnami/redis tag: 7.0.11 release_name: nextcloud resources: NVIDIA_CAPS: - all limits: cpu: 4000m memory: 8Gi requests: cpu: 10m memory: 50Mi scaleCertificate: nextcloud-cert: enabled: true id: 1 scaleExternalInterface: [] scaleGPU: [] secret: {} securityContext: container: PUID: 568 UMASK: "002" allowPrivilegeEscalation: false capabilities: add: [] drop: - ALL privileged: false readOnlyRootFilesystem: true runAsGroup: 568 runAsNonRoot: true runAsUser: 568 seccompProfile: type: RuntimeDefault pod: fsGroup: 568 fsGroupChangePolicy: OnRootMismatch supplementalGroups: [] sysctls: [] service: nextcloud: enabled: true ports: webui: enabled: true port: 80 primary: true targetPort: 80 targetSelector: nextcloud primary: true targetSelector: nextcloud type: ClusterIP nextcloud-nginx: enabled: true ports: webui-tls: enabled: true nodePort: 9002 port: 9002 targetPort: 9002 targetSelector: nginx targetSelector: nginx type: NodePort postgres: enabled: true ports: postgres: enabled: true port: 5432
primary: true targetPort: 5432 targetSelector: postgres targetSelector: postgres type: ClusterIP redis: enabled: true ports: redis: enabled: true port: 6379 primary: true targetPort: 6379 targetSelector: redis targetSelector: redis type: ClusterIP serviceAccount: {} workload: nextcloud: enabled: true podSpec: containers: nextcloud: enabled: true envFrom: - secretRef: name: nextcloud-creds imageSelector: image lifecycle: postStart: command: - /bin/sh - -c - echo "Installing ..." apt update && apt install -y --no-install-recommends \ echo "Failed to install binary/binaries..." echo "Finished." type: exec primary: true probes: liveness: enabled: true httpHeaders: Host: localhost path: /status.php port: 80 type: http readiness: enabled: true httpHeaders: Host: localhost path: /status.php port: 80 type: http startup: enabled: true httpHeaders: Host: localhost path: /status.php port: 80 type: http securityContext: capabilities: add: - CHOWN - DAC_OVERRIDE - FOWNER - NET_BIND_SERVICE - NET_RAW - SETGID - SETUID readOnlyRootFilesystem: false runAsGroup: 0 runAsNonRoot: false runAsUser: 0 hostNetwork: false initContainers: postgres-wait: args: - -c - echo "Waiting for postgres to be ready" until pg_isready -h ${POSTGRES_HOST} -U ${POSTGRES_USER} -d ${POSTGRES_DB}; do sleep 2 done command: bash enabled: true envFrom: - secretRef: name: postgres-creds imageSelector: postgresImage resources: limits: cpu: 500m memory: 256Mi type: init redis-wait: args: - -c - - echo "Waiting for redis to be ready" until redis-cli -h "$REDIS_HOST" -a "$REDIS_PASSWORD" -p ${REDIS_PORT_NUMBER:-6379} ping | grep -q PONG; do echo "Waiting for redis to be ready. Sleeping 2 seconds..." sleep 2 done echo "Redis is ready!"
command: bash enabled: true envFrom: - secretRef: name: redis-creds imageSelector: redisImage resources: limits: cpu: 500m memory: 256Mi type: init securityContext: fsGroup: 33 primary: true type: Deployment nginx: enabled: true podSpec: containers: nginx: enabled: true imageSelector: nginxImage primary: true probes: liveness: enabled: true httpHeaders: Host: localhost path: /status.php port: 9002 type: https readiness: enabled: true httpHeaders: Host: localhost path: /status.php port: 9002 type: https startup: enabled: true httpHeaders: Host: localhost path: /status.php port: 9002 type: https securityContext: capabilities: add: - CHOWN - DAC_OVERRIDE - FOWNER - NET_BIND_SERVICE - NET_RAW - SETGID - SETUID readOnlyRootFilesystem: false runAsGroup: 0 runAsNonRoot: false runAsUser: 0 hostNetwork: false initContainers: 01-wait-server: args: - -c - - echo "Waiting for [http://nextcloud:80]"; until wget --spider --quiet --timeout=3 --tries=1 http://nextcloud:80/status.php; do echo "Waiting for [http://nextcloud:80]"; sleep 2; done echo "Nextcloud is up: http://nextcloud:80"; command: - bash enabled: true imageSelector: bashImage type: init type: Deployment postgres: enabled: true podSpec: containers: postgres: enabled: true envFrom: - secretRef: name: postgres-creds imageSelector: ncPostgresImage primary: true probes: liveness: command: - sh - -c - until pg_isready -U ${POSTGRES_USER} -h localhost; do sleep 2; done enabled: true type: exec readiness: command: - sh - -c - until pg_isready -U ${POSTGRES_USER} -h localhost; do sleep 2; done enabled: true type: exec startup: command: - sh - -c - until pg_isready -U ${POSTGRES_USER} -h localhost; do sleep 2; done enabled: true type: exec resources: limits: cpu: 4000m memory: 8Gi securityContext: readOnlyRootFilesystem: false runAsGroup: 999 runAsUser: 999 initContainers: permissions: args: - -c - "for dir in /mnt/directories/*; do\n if [ !
-d \"$dir\" ]; then\n echo \"[$dir] is not a directory, skipping\"\n continue\n fi\n\n echo \"Current Ownership and Permissions on [\"$dir\"]:\"\n echo \"chown: $(stat -c \"%u %g\" \"$dir\")\"\n echo \"chmod: $(stat -c \"%a\" \"$dir\")\" \n fix_owner=\"true\"\n fix_perms=\"true\"\n\n\n if [ \"$fix_owner\" = \"true\" ]; then\n echo \"Changing ownership to 999:999 on: [\"$dir\"]\"\n \ chown -R 999:999 \"$dir\"\n echo \"Finished changing ownership\"\n \ echo \"Ownership after changes:\"\n stat -c \"%u %g\" \"$dir\"\n \ fi\ndone\n" command: bash enabled: true imageSelector: bashImage resources: limits: cpu: 1000m memory: 512Mi securityContext: capabilities: add: - CHOWN readOnlyRootFilesystem: false runAsGroup: 0 runAsNonRoot: false runAsUser: 0 type: install type: Deployment postgresbackup: annotations: helm.sh/hook: pre-upgrade helm.sh/hook-delete-policy: hook-succeeded helm.sh/hook-weight: "1" enabled: true podSpec: containers: postgresbackup: command: - sh - -c - echo 'Fetching password from config.php' # sed removes ' , => spaces and db from the string POSTGRES_USER=$(cat /nc-config/config/config.php | grep 'dbuser' | sed "s/dbuser ',=>//g") POSTGRES_PASSWORD=$(cat /nc-config/config/config.php | grep 'dbpassword' | sed "s/dbpassword ',=>//g") POSTGRES_DB=$(cat /nc-config/config/config.php | grep 'dbname' | sed "s/dbname ',=>//g") [ -n "$POSTGRES_USER" ] && [ -n "$POSTGRES_PASSWORD" ] && [ -n "$POSTGRES_DB" ] && echo 'User, Database and password fetched from config.php' until pg_isready -U ${POSTGRES_USER} -h ${POSTGRES_HOST}; do sleep 2; done echo "Creating backup of ${POSTGRES_DB} database" pg_dump --dbname=${POSTGRES_URL} --file /postgres_backup/${POSTGRES_DB}$(date +%Y-%m-%d_%H-%M-%S).sql || echo "Failed to create backup" echo "Backup finished" enabled: true envFrom: - secretRef: name: postgres-backup-creds imageSelector: ncPostgresImage primary: true probes: liveness: enabled: false readiness: enabled: false startup: enabled: false resources: limits: cpu: 2000m memory: 2Gi
securityContext: readOnlyRootFilesystem: false runAsGroup: 999 runAsUser: 999 initContainers: permissions: args: - -c - "for dir in /mnt/directories/*; do\n if [ ! -d \"$dir\" ]; then\n echo \"[$dir] is not a directory, skipping\"\n continue\n fi\n\n echo \"Current Ownership and Permissions on [\"$dir\"]:\"\n echo \"chown: $(stat -c \"%u %g\" \"$dir\")\"\n echo \"chmod: $(stat -c \"%a\" \"$dir\")\" \n if [ $(stat -c %u \"$dir\") -eq 999 ] && [ $(stat -c %g \"$dir\") -eq 999 ]; then\n echo \"Ownership is correct. Skipping...\"\n fix_owner=\"false\"\n \ else\n echo \"Ownership is incorrect. Fixing...\"\n fix_owner=\"true\"\n \ fi\n\n\n if [ \"$fix_owner\" = \"true\" ]; then\n echo \"Changing ownership to 999:999 on: [\"$dir\"]\"\n chown -R 999:999 \"$dir\"\n \ echo \"Finished changing ownership\"\n echo \"Ownership after changes:\"\n \ stat -c \"%u %g\" \"$dir\"\n fi\ndone" command: bash enabled: true imageSelector: bashImage resources: limits: cpu: 1000m memory: 512Mi securityContext: capabilities: add: - CHOWN readOnlyRootFilesystem: false runAsGroup: 0 runAsNonRoot: false runAsUser: 0 type: init restartPolicy: Never securityContext: fsGroup: "33" type: Job redis: enabled: true podSpec: containers: redis: enabled: true envFrom: - secretRef: name: redis-creds imageSelector: redisImage primary: true probes: liveness: command: - /bin/sh - -c - redis-cli -a "$REDIS_PASSWORD" -p ${REDIS_PORT_NUMBER:-6379} ping | grep -q PONG enabled: true type: exec readiness: command: - /bin/sh - -c - redis-cli -a "$REDIS_PASSWORD" -p ${REDIS_PORT_NUMBER:-6379} ping | grep -q PONG enabled: true type: exec startup: command: - /bin/sh - -c - redis-cli -a "$REDIS_PASSWORD" -p ${REDIS_PORT_NUMBER:-6379} ping | grep -q PONG enabled: true type: exec resources: limits: cpu: 4000m memory: 8Gi securityContext: readOnlyRootFilesystem: false runAsGroup: 0 runAsNonRoot: false runAsUser: 1001 securityContext: fsGroup: 1001 type: Deployment See error above values.
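The pre-upgrade backup job above scrapes the database credentials out of Nextcloud's config.php with a grep/sed pipeline. A self-contained sketch of that idea, using a made-up config fragment and simplified sed expressions (not the chart's exact ones):

```shell
# Stand-in for the relevant lines of /var/www/html/config/config.php (values are made up)
cfg="  'dbuser' => 'oc_admin',
  'dbpassword' => 's3cret',
  'dbname' => 'nextcloud',"

# Same idea as the backup container: grep the line for the key, then strip
# spaces, quotes, commas and the key name itself, leaving only the value
POSTGRES_USER=$(printf '%s\n' "$cfg" | grep 'dbuser' | sed "s/[ ',]//g; s/dbuser=>//")
POSTGRES_PASSWORD=$(printf '%s\n' "$cfg" | grep 'dbpassword' | sed "s/[ ',]//g; s/dbpassword=>//")
echo "$POSTGRES_USER $POSTGRES_PASSWORD"
# prints: oc_admin s3cret
```

This is fragile by design (any key whose name contains another key's name would confuse the grep), which is presumably why the job also checks that all three variables are non-empty before running pg_dump.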
submitted by rweninger to truenas [link] [comments]


2024.05.14 07:44 Murky_Egg_5794 CORS not working for app in Docker but work when run on simple dotnet command

Hello everyone, I am totally new to Docker and I have been stuck on this for around 5 days now. I have a web app where the frontend uses React and Node.js and the backend is a C# ASP.NET server.
I have handled CORS policy blocking as below for my frontend (running on localhost:3000) to communicate with my backend (running on localhost:5268), and they work fine.
The code that handles CORS policy blocking:
var MyAllowSpecificOrigins = "_myAllowSpecificOrigins";
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddCors(options =>
{
    options.AddPolicy(name: MyAllowSpecificOrigins, policy =>
    {
        policy.WithOrigins("http://localhost:3000/")
              .AllowAnyMethod()
              .AllowAnyHeader();
    });
});
builder.Services.AddControllers();
builder.Services.AddHttpClient();

var app = builder.Build();
app.UseHttpsRedirection();
app.UseCors(MyAllowSpecificOrigins);
app.UseAuthorization();
app.MapControllers();
app.Run();
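One detail worth ruling out in the policy above: `WithOrigins("http://localhost:3000/")` ends with a trailing slash, while the browser's `Origin` header never does, and ASP.NET Core's documentation warns that origin comparison is an exact string match, so a trailing slash causes the origin to never match. Since the non-Docker setup reportedly worked, this may not be the culprit here, but it is cheap to check. A trivial sketch of the mismatch:

```shell
# The browser sends: Origin: http://localhost:3000   (never with a trailing slash)
browser_origin="http://localhost:3000"
configured_origin="http://localhost:3000/"   # the value passed to WithOrigins(...)
if [ "$browser_origin" = "$configured_origin" ]; then
  echo "allowed"
else
  echo "not allowed (exact match fails on the trailing slash)"
fi
# prints: not allowed (exact match fails on the trailing slash)
```

Dropping the slash, i.e. `WithOrigins("http://localhost:3000")`, is the documented form.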
However, when I added Docker to my project and ran the command docker run -p 5268:80 App to start my backend container, I received an error in my browser:
Access to XMLHttpRequest at 'http://localhost:5268/news' from origin 'http://localhost:3000' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. 
I added a Kestrel section to appsettings.json to change the base service port, as below:
"Kestrel": {
  "EndPoints": {
    "Http": {
      "Url": "http://+:80"
    }
  }
}
Here is my Dockerfile:
# Get base SDK Image from Microsoft
FROM mcr.microsoft.com/dotnet/sdk:7.0 AS build-env
WORKDIR /app
ENV ASPNETCORE_URLS=http://+:80
EXPOSE 80

# Copy the csproj and restore all of the nugets
COPY *.csproj ./
RUN dotnet restore

# Copy the rest of the project files and build out release
COPY . ./
RUN dotnet publish -c Release -o out

# Generate runtime image
FROM mcr.microsoft.com/dotnet/sdk:7.0
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT [ "dotnet", "backend.dll" ]
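As a side note, the conventional shape for this kind of Dockerfile builds with the SDK image but runs on the smaller ASP.NET runtime image; the post uses the SDK image for both stages, which works but produces a much larger image. A sketch of that layout (the runtime image name is the standard Microsoft one, not taken from the post):

```Dockerfile
# Build stage: full SDK for restore/publish
FROM mcr.microsoft.com/dotnet/sdk:7.0 AS build-env
WORKDIR /app
COPY *.csproj ./
RUN dotnet restore
COPY . ./
RUN dotnet publish -c Release -o out

# Runtime stage: ASP.NET runtime only
FROM mcr.microsoft.com/dotnet/aspnet:7.0
WORKDIR /app
ENV ASPNETCORE_URLS=http://+:80
EXPOSE 80
COPY --from=build-env /app/out .
ENTRYPOINT [ "dotnet", "backend.dll" ]
```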
Here is my launchSettings.json file's content:
{
  "_comment": "For devEnv: http://localhost:5268 and for proEnv: https://kcurr-backend.onrender.com",
  "iisSettings": {
    "windowsAuthentication": false,
    "anonymousAuthentication": true,
    "iisExpress": {
      "applicationUrl": "http://localhost:19096",
      "sslPort": 44358
    }
  },
  "profiles": {
    "http": {
      "commandName": "Project",
      "dotnetRunMessages": true,
      "launchBrowser": true,
      "applicationUrl": "http://localhost:5268",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    },
    "https": {
      "commandName": "Project",
      "dotnetRunMessages": true,
      "launchBrowser": true,
      "applicationUrl": "https://localhost:7217;http://localhost:5268",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    },
    "IIS Express": {
      "commandName": "IISExpress",
      "launchBrowser": true,
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}
I did some research and found suggestions to use NGINX to fix it, so I added an nginx.conf and told Docker to use it, as below:
now my Dockerfile only has:
# Read NGINX config to fix CORS policy blocking
FROM nginx:alpine
WORKDIR /etc/nginx
COPY ./nginx.conf ./conf.d/default.conf
EXPOSE 80
ENTRYPOINT [ "nginx" ]
CMD [ "-g", "daemon off;" ]
here is nginx.conf:
upstream api {
    # Could be host.docker.internal - Docker for Mac/Windows - the host itself
    # Could be your API in a appropriate domain
    # Could be other container in the same network, like container_name:port
    server 5268:80;
}

server {
    listen 80;
    server_name localhost;

    location / {
        if ($request_method = 'OPTIONS') {
            add_header 'Access-Control-Max-Age' 1728000;
            add_header 'Access-Control-Allow-Origin' '*';
            add_header 'Access-Control-Allow-Headers' 'Authorization,Accept,Origin,DNT,X-CustomHeader,Keep-Alive,User-Agent, X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Range,Range';
            add_header 'Access-Control-Allow-Methods' 'GET,POST,OPTIONS,PUT,DELETE,PATCH';
            add_header 'Content-Type' 'application/json';
            add_header 'Content-Length' 0;
            return 204;
        }
        add_header 'Access-Control-Allow-Origin' '*';
        add_header 'Access-Control-Allow-Headers' 'Authorization,Accept,Origin,DNT,X-CustomHeader,Keep-Alive,User-Agent, X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Range,Range';
        add_header 'Access-Control-Allow-Methods' 'GET,POST,OPTIONS,PUT,DELETE,PATCH';
        proxy_pass http://api/;
    }
}
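One thing stands out in this config: in an nginx `upstream` block, `server` takes a host (or IP) plus port, so `server 5268:80;` makes nginx treat the bare number 5268 as a 32-bit IPv4 address in numbers-and-dots notation. That is exactly the mystery address in the 502 errors further down: 5268 decodes to 0.0.20.148. A quick sketch of the decoding:

```shell
# nginx resolves the bare number 5268 as a 32-bit IPv4 address,
# which decodes byte by byte like this:
n=5268
printf '%d.%d.%d.%d\n' "$((n >> 24 & 255))" "$((n >> 16 & 255))" "$((n >> 8 & 255))" "$((n & 255))"
# prints: 0.0.20.148
```

The usual fix is a real host:port pair, e.g. `server host.docker.internal:5268;` on Docker Desktop for Mac/Windows (assuming the API's port is published on the host), or `server <backend-container-name>:80;` when both containers share a Docker network; those names are placeholders, not taken from the post.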
When I build the image with docker build -t kcurr-backend . and then run docker run -p 5268:80 kcurr-backend, no error is shown on the console:
2024/05/14 05:58:36 [notice] 1#1: using the "epoll" event method
2024/05/14 05:58:36 [notice] 1#1: nginx/1.25.5
2024/05/14 05:58:36 [notice] 1#1: built by gcc 13.2.1 20231014 (Alpine 13.2.1_git20231014)
2024/05/14 05:58:36 [notice] 1#1: OS: Linux 6.6.22-linuxkit
2024/05/14 05:58:36 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2024/05/14 05:58:36 [notice] 1#1: start worker processes
2024/05/14 05:58:36 [notice] 1#1: start worker process 7
2024/05/14 05:58:36 [notice] 1#1: start worker process 8
2024/05/14 05:58:36 [notice] 1#1: start worker process 9
2024/05/14 05:58:36 [notice] 1#1: start worker process 10
2024/05/14 05:58:36 [notice] 1#1: start worker process 11
2024/05/14 05:58:36 [notice] 1#1: start worker process 12
2024/05/14 05:58:36 [notice] 1#1: start worker process 13
2024/05/14 05:58:36 [notice] 1#1: start worker process 14
However, I still cannot connect my frontend to my backend and received the same error in the browser as before. I also received a new error on the console, as below:
2024/05/14 05:58:42 [error] 8#8: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.65.1, server: localhost, request: "GET /curcurrency-country HTTP/1.1", upstream: "http://0.0.20.148:80/curcurrency-country", host: "localhost:5268", referrer: "http://localhost:3000/"
2024/05/14 05:58:42 [error] 7#7: *2 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.65.1, server: localhost, request: "POST /news HTTP/1.1", upstream: "http://0.0.20.148:80/news", host: "localhost:5268", referrer: "http://localhost:3000/"
192.168.65.1 - - [14/May/2024:05:58:42 +0000] "POST /news HTTP/1.1" 502 559 "http://localhost:3000/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" "-"
192.168.65.1 - - [14/May/2024:05:58:42 +0000] "GET /curcurrency-country HTTP/1.1" 502 559 "http://localhost:3000/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" "-"
Does anyone know what I should do to fix the CORS policy blocking for my dockerized backend?
please help.
submitted by Murky_Egg_5794 to dotnetcore [link] [comments]



2024.05.12 18:52 4ronse Exposing HA with a Cloudflared Tunnel

Hello, I'm trying to access HA remotely using a Cloudflared Tunnel. Both HA and the tunnel are running in separate LXCs on my Proxmox machine.
I tried running the cloudflared add-on by brenner-tobias and got the same result.
configuration.yaml:
# Loads default set of integrations. Do not remove.
default_config:

# Http shit
http:
  use_x_forwarded_for: true
  base_url: https://[HA public URL]
  ip_ban_enabled: false
  trusted_proxies:
    - 192.168.1.0/24
    - ff80::/64
    # === CLOUDFLARE ===
    # IPv4
    - 173.245.48.0/20
    - 103.21.244.0/22
    - 103.22.200.0/22
    - 103.31.4.0/22
    - 141.101.64.0/18
    - 108.162.192.0/18
    - 190.93.240.0/20
    - 188.114.96.0/20
    - 197.234.240.0/22
    - 198.41.128.0/17
    - 162.158.0.0/15
    - 104.16.0.0/13
    - 104.24.0.0/14
    - 172.64.0.0/13
    - 131.0.72.0/22
    # IPv6
    - 2400:cb00::/32
    - 2606:4700::/32
    - 2803:f800::/32
    - 2405:b500::/32
    - 2405:8100::/32
    - 2a06:98c0::/29
    - 2c0f:f248::/32

homeassistant: [private info :)]

logger:
  logs:
    custom_components.ttlock: debug
    homeassistant.components.http: debug

sensor: [some sensors]

# Load frontend themes from the themes folder
frontend:
  themes: !include_dir_merge_named themes

automation: !include automations.yaml
script: !include scripts.yaml
scene: !include scenes.yaml
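A small detail worth double-checking in the trusted_proxies list above: the IPv6 link-local range is `fe80::/64`, not `ff80::/64`. It is probably not the cause here, since the cloudflared host (192.168.1.67 in the log below) is already covered by `192.168.1.0/24`, but a corrected sketch of the relevant keys might look like:

```yaml
http:
  use_x_forwarded_for: true
  ip_ban_enabled: false
  trusted_proxies:
    - 192.168.1.0/24  # LAN, covers the cloudflared LXC
    - fe80::/64       # IPv6 link-local ("ff80::/64" appears to be a typo)
```

Note also that newer Home Assistant releases removed `http.base_url` in favor of `internal_url`/`external_url` under `homeassistant:`; worth verifying against the docs for the installed version.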
HA Log:
2024-05-12 19:44:45.198 WARNING (MainThread) [homeassistant.components.http.ban] Login attempt or request with invalid authentication from cloudflared.local (192.168.1.67). Requested URL: '/auth/token'. (Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36) 
/auth/token Response:
Request URL: https://[HA public url]/auth/token
Request Method: POST
Status Code: 400 Bad Request
Remote Address: [Cloudflare's IP]
Referrer Policy: same-origin

{"error":"invalid_request","error_description":"Invalid code"}
/auth/token Request:
client_id: https://[HA public url]/ code: [auth code] grant_type: authorization_code 
Currently the CF Tunnel is proxying a Traefik container, but even if I point it to homeassistant.local:8123 it ends up with the same result.
submitted by 4ronse to homeassistant [link] [comments]


2024.05.12 16:17 AuGanymede Troubleshooting traffic routing with Dante Proxy in Docker containers

Hello,
I am seeking assistance in this matter, as I have exhausted my options and lack the necessary knowledge to resolve the issue I am facing.

Disclaimer

I am primarily a graphic designer, with my technical knowledge limited to front-end development (HTML, SCSS, JS) and basic router configuration. So I might be unfamiliar with some basic concepts that are evident to others, for which I apologize in advance.

Objective

TL;DR My goal is to operate two VPNs – currently just one – simultaneously in separate containers, directing traffic to different browsers through a SOCKS5 proxy, while the rest of the system's traffic remains direct.
Due to various needs, I operate my personal MacBook M1 both at work and remotely, often needing to access corporate network resources. I prefer not to install Palo Alto GlobalProtect (VPN) software on my personal device. Additionally, I use a private VPN (Mullvad) on networks outside my home to maintain my privacy. I also aim to avoid directing all my device's traffic through VPNs to ensure smooth streaming (via Safari) and gaming (on Steam).
Having heard about Docker from my colleagues, I decided to utilize it to create separate containers: one for the corporate VPN and another for private internet use, each configured with OpenConnect/OpenVPN and proxy servers. This setup would enable me to connect designated browsers or apps through a SOCKS5 proxy on my macOS, thus accessing specific resources like the corporate network without compromising other activities.
For the private VPN, I intended to use Gluetun, but have yet to proceed as I've encountered difficulties with the PaloAlto implementation.

Action taken

I succeeded in setting up a container using OpenConnect that effectively connects to the VPN server. Additionally, I incorporated a Dante proxy server to facilitate traffic routing from the container to the system.
I turned to the insights of more experienced individuals and found a project by ducmthai.
However, this approach ultimately led to the same issue I was initially facing.

Problem

TL;DR VPN operates inside the container but does not allow external traffic.
Within the container, I can access services on the corporate network. However, attempts to transmit external traffic fail; ports are visible yet unresponsive, as confirmed by unsuccessful attempts with 'curl' and Firefox through SOCKS5 (the proxy server disconnects or exceeds connection time limits).
The tests were performed on two devices: a MacBook M1 and a brand-new MacBook M3, both running macOS Sonoma 14.4.1 with firewalls turned off and Docker version 4.29.0 installed. Only Firefox, Brew, and Docker have been set up on the M3. Both machines yielded identical results from the tests.

Ports

In a container

```bash
root@cb0c4ae24961:/# netstat -tulnp | grep 8888
tcp        0      0 0.0.0.0:8888       0.0.0.0:*       LISTEN      69/danted
```

In the system

```bash
ganymede@macbook ~ % nc -v 127.0.0.1 8888
Connection to 127.0.0.1 port 8888 [tcp/ndl-aas] succeeded
```

Connection

In a container

```bash
ganymede@macbook vpn-proxy % docker exec -it vpn_proxy /bin/bash
ff153b881395:/# ping google.com
PING google.com (142.250.203.142): 56 data bytes
64 bytes from 142.250.203.142: seq=0 ttl=115 time=106.399 ms
64 bytes from 142.250.203.142: seq=1 ttl=115 time=125.724 ms
64 bytes from 142.250.203.142: seq=2 ttl=115 time=110.234 ms
^C
--- google.com ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 106.399/114.119/125.724 ms
```
```bash
ff153b881395:/# ping service.company.com
PING service.company.com (XXX.18.0.XXX): 56 data bytes
64 bytes from XXX.18.0.XXX: seq=0 ttl=61 time=53.767 ms
64 bytes from XXX.18.0.XXX: seq=1 ttl=61 time=49.910 ms
64 bytes from XXX.18.0.XXX: seq=2 ttl=61 time=63.906 ms
64 bytes from XXX.18.0.XXX: seq=3 ttl=61 time=84.899 ms
^C
--- service.company.com ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 49.910/63.120/84.899 ms
ff153b881395:/
```
↑ curl also worked.

In the system

```bash
ganymede@macbook ~ % curl -v -x socks5h://127.0.0.1:8888 http://www.google.com
*   Trying 127.0.0.1:8888...
* Connected to 127.0.0.1 (127.0.0.1) port 8888
* Recv failure: Connection reset by peer
* SOCKS4: Failed receiving initial SOCKS5 response: Failure when receiving data from the peer
* Closing connection
curl: (97) Recv failure: Connection reset by peer

ganymede@macbook ~ % curl -v -x socks5://127.0.0.1:8888 http://www.google.com
*   Trying 127.0.0.1:8888...
* Connected to 127.0.0.1 (127.0.0.1) port 8888
* Recv failure: Connection reset by peer
* SOCKS4: Failed receiving initial SOCKS5 response: Failure when receiving data from the peer
* Closing connection
curl: (97) Recv failure: Connection reset by peer
```

Firefox

  • Host SOCKS: 127.0.0.1, Port: 8888
  • SOCKS v5
  • Proxy DNS checked

Logs

Docker

```bash
2024-05-12 01:27:16 POST https://vpn.company.com/ssl-vpn/prelogin.esp?tmp=tmp&clientVer=4100&clientos=Linux
2024-05-12 01:27:16 Connected to XXX.XXX.XX.XXX:443
2024-05-12 01:27:16 Client certificate has expired at: Sat, 13 Apr 2024 13:06:44 GMT
2024-05-12 01:27:16 Using client certificate 'Name Surname'
2024-05-12 01:27:16 SSL negotiation with vpn.company.com
2024-05-12 01:27:16 Connected to HTTPS on vpn.company.com
2024-05-12 01:27:16 Enter login credentials
2024-05-12 01:27:16 POST https://vpn.company.com/ssl-vpn/login.esp
2024-05-12 01:27:17 GlobalProtect login returned authentication-source=EXTERNAL_CI_ldap-auth_profile
2024-05-12 01:27:17 POST https://vpn.company.com/ssl-vpn/getconfig.esp
2024-05-12 01:27:17 Session will expire after 43200 minutes.
2024-05-12 01:27:17 Tunnel timeout (rekey interval) is 180 minutes.
2024-05-12 01:27:17 Idle timeout is 180 minutes.
2024-05-12 01:27:17 Potential IPv6-related GlobalProtect config tag : no
2024-05-12 01:27:17 This build does not support GlobalProtect IPv6 due to a lack of
2024-05-12 01:27:17 of information on how it is configured. Please report this
2024-05-12 01:27:17 to .
2024-05-12 01:27:17 No MTU received. Calculated 65454 for ESP tunnel
2024-05-12 01:27:17 POST https://vpn.company.com/ssl-vpn/hipreportcheck.esp
2024-05-12 01:27:17 Connected as 10.XX.XXX.XXX, using SSL, with ESP in progress
2024-05-12 01:27:17 ESP session established with server
2024-05-12 01:27:17 ESP tunnel connected; exiting HTTPS mainloop.
2024-05-12 01:27:27 May 11 23:27:27 (1715470047.136295) danted[69]: info: Dante/server[1/1] v1.4.2 running
```

Current configuration

Dockerfile

```Dockerfile
# Use a specific version of the Ubuntu base image to avoid mismatches
FROM ubuntu:20.04

# Set a non-interactive front-end for easier Docker building
ENV DEBIAN_FRONTEND=noninteractive

# Install network utilities including ping
RUN apt-get update && apt-get install -y \
    iputils-ping \
    traceroute \
    curl

# Update and install necessary packages
RUN apt-get update && \
    apt-get -y upgrade && \
    apt-get install -y openconnect iproute2 dante-server --fix-missing

# List all files installed by the dante-server
RUN dpkg -L dante-server

# Verify if dante-server installed successfully
RUN which dante-server || echo "Dante-server binary location not found"

# Attempt to locate the sockd binary
RUN which sockd || find / -name sockd -type f || echo "sockd binary not found"

# Clean up to reduce image size
RUN apt-get clean && \
    rm -rf /var/lib/apt/lists/*

# Copy the configurations and scripts
COPY certificate.pem /etc/openconnect/
COPY private-key.pem /etc/openconnect/
COPY sockd.conf /etc/
COPY entrypoint.sh /entrypoint.sh

# Ensure the entrypoint script is executable
RUN chmod +x /entrypoint.sh
EXPOSE 8888

# Execute the entrypoint script on container start
ENTRYPOINT ["/entrypoint.sh"]
CMD ["sleep", "infinity"]
```

entrypoint.sh (VPN)

```bash
#!/bin/bash

# Start the OpenConnect VPN connection (in background),
# feeding the password on stdin
echo '********************' | openconnect \
    --protocol=gp \
    --verbose \
    --user=name.surname@company.com \
    --passwd-on-stdin \
    --certificate=/etc/openconnect/certificate.pem \
    --sslkey=/etc/openconnect/private-key.pem \
    https://vpn.company.com/gateway &

# Delay to ensure connection stability
sleep 10

# Start the Dante SOCKS server using the correct binary name
/usr/sbin/danted -f /etc/sockd.conf

# Keep the script running
wait $!
```
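One fragile spot in the script above is the fixed `sleep 10`: if the VPN takes longer than that to come up, danted starts before the tunnel interface exists. A hedged alternative (my sketch, not from the original post) is to poll for the interface instead; `tun0` is an assumed name, so check `ip link` inside the container for the real one:

```shell
#!/bin/bash
# Poll for a network interface instead of sleeping a fixed 10 seconds.
# Returns 0 as soon as the interface exists, 1 after `tries` seconds.
# Uses /sys/class/net so it works even if the `ip` tool is missing.
wait_for_iface() {
  local iface="$1" tries="${2:-30}" i
  for ((i = 0; i < tries; i++)); do
    if [ -d "/sys/class/net/${iface}" ]; then
      return 0
    fi
    sleep 1
  done
  return 1
}

# Hypothetical usage in the entrypoint (tun0 is an assumption):
#   wait_for_iface tun0 30 || { echo "VPN tunnel never came up" >&2; exit 1; }
#   /usr/sbin/danted -f /etc/sockd.conf
```

This also turns "VPN never connected" into an explicit startup failure instead of a SOCKS server that listens but resets every connection.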

sockd.conf (Dante)

```bash
logoutput: stderr
debug: 1

internal: 0.0.0.0 port = 8888
external: eth0

clientmethod: none
socksmethod: none
user.privileged: root
user.unprivileged: nobody

client pass {
    from: 0.0.0.0/0 to: 0.0.0.0/0
    log: connect disconnect
}

socks pass {
    from: 0.0.0.0/0 to: 0.0.0.0/0
    log: connect disconnect
    command: bind connect udpassociate
}
```
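One observation on the config above (mine, not from the post): with `clientmethod: none` and `socksmethod: none`, anyone who can reach port 8888 gets an unauthenticated ride into the VPN. That's fine while the port is published only on localhost, but if it's ever exposed more widely, Dante can authenticate against system accounts. A hedged sketch, assuming a dedicated user (e.g. created with `adduser proxyuser`) exists in the image:

```bash
# Require username/password authentication for SOCKS requests,
# checked against the container's user database.
socksmethod: username

socks pass {
    from: 0.0.0.0/0 to: 0.0.0.0/0
    socksmethod: username
    log: connect disconnect
}
```

Clients would then connect with `curl -x socks5://proxyuser:password@127.0.0.1:8888 …`.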

Docker run

```bash
docker run -it --rm --privileged --cap-add=NET_ADMIN --name=myvpn -p 8888:8888 openconnect-socks
```
submitted by AuGanymede to docker [link] [comments]


2024.05.11 13:07 bennylawa Customize the user agent of Citrix Workspace client ?

My employer requires me to use the Citrix Workspace client to work remotely, but it only supports Windows 10/11 and macOS 12+.
I have Ubuntu installed on my PC, and to work around this limitation I run the Workspace client in a Windows 10 VM, which has some advantages (work and private life neatly separated) but also noticeable performance drawbacks.
I have already tried installing the Workspace client for Ubuntu and, although I am able to download the .ica file from the browser, sooner or later the session is stopped with a warning message saying that I'm using an unsupported OS.
Is there a way to customize the user agent, or to change some configuration file, so that my OS can be identified as Windows 11 (for example) ?

submitted by bennylawa to Citrix [link] [comments]


2024.05.11 03:31 DaGreeg Trying to load some mods, getting a crash report

SO!
I have zero idea what I'm really doing, and hoping the kind people of the internet can aid me.

I'm trying to run 1.20.6, as you do, with the following mods: - BetterF3-10.0.0-Fabric-1.20.6.jar
- cicada-lib-0.7.2+1.20.5-and-above.jar
- inventoryhud.fabric.1.20.6-3.4.21.jar
- iris-1.7.0+mc1.20.6.jar
- sodium-fabric-0.5.8+mc1.20.6.jar
- voxelmap-1.20.6-1.12.19.jar
- do_a_barrel_roll-fabric-3.5.6+1.20.6.jar
- fabric-api-0.97.8+1.20.6.jar (I have also tried this with the 0.98.0 version, no difference.)
I have concluded through trial and error that the issue lies in the interaction between Do a Barrel Roll (DABR), Iris, and Sodium. Loading them independently of one another, Minecraft loads fine, no issues whatsoever. It's only when I try to run them together that problems arise.
If anyone is able to help, please send me your wisdom. It is much appreciated :D
Here's the crash report: ---- Minecraft Crash Report ----
// Who set us up the TNT?

Time: 2024-05-11 11:14:33
Description: Initializing game

java.lang.BootstrapMethodError: java.lang.RuntimeException: Mixin transformation of net.minecraft.class_757 failed
	at net.minecraft.class_4668.(class_4668.java:126)
	at net.minecraft.class_8538.method_51643(class_8538.java:22)
	at net.minecraft.class_377.method_2012(class_377.java:176)
	at net.minecraft.class_7191.bake(class_7191.java:53)
	at net.minecraft.class_377.method_57038(class_377.java:65)
	at net.minecraft.class_377.method_57036(class_377.java:54)
	at net.minecraft.class_377.method_2004(class_377.java:49)
	at net.minecraft.class_378.method_27540(class_378.java:66)
	at net.minecraft.class_156.method_654(class_156.java:506)
	at net.minecraft.class_378.(class_378.java:66)
	at net.minecraft.class_310.(class_310.java:561)
	at net.minecraft.client.main.Main.main(Main.java:223)
	at net.fabricmc.loader.impl.game.minecraft.MinecraftGameProvider.launch(MinecraftGameProvider.java:470)
	at net.fabricmc.loader.impl.launch.knot.Knot.launch(Knot.java:74)
	at net.fabricmc.loader.impl.launch.knot.KnotClient.main(KnotClient.java:23)
Caused by: java.lang.RuntimeException: Mixin transformation of net.minecraft.class_757 failed
	at net.fabricmc.loader.impl.launch.knot.KnotClassDelegate.getPostMixinClassByteArray(KnotClassDelegate.java:427)
	at net.fabricmc.loader.impl.launch.knot.KnotClassDelegate.tryLoadClass(KnotClassDelegate.java:323)
	at net.fabricmc.loader.impl.launch.knot.KnotClassDelegate.loadClass(KnotClassDelegate.java:218)
	at net.fabricmc.loader.impl.launch.knot.KnotClassLoader.loadClass(KnotClassLoader.java:119)
	at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:526)
	... 15 more
Caused by: org.spongepowered.asm.mixin.transformer.throwables.MixinTransformerError: An unexpected critical error was encountered
	at org.spongepowered.asm.mixin.transformer.MixinProcessor.applyMixins(MixinProcessor.java:392)
	at org.spongepowered.asm.mixin.transformer.MixinTransformer.transformClass(MixinTransformer.java:234)
	at org.spongepowered.asm.mixin.transformer.MixinTransformer.transformClassBytes(MixinTransformer.java:202)
	at net.fabricmc.loader.impl.launch.knot.KnotClassDelegate.getPostMixinClassByteArray(KnotClassDelegate.java:422)
	... 19 more
Caused by: org.spongepowered.asm.mixin.injection.throwables.InjectionError: Critical injection failure: Redirector iris$applyBobbingToModelView(Lorg/joml/Matrix4f;FFFF)Lorg/joml/Matrix4f; in mixins.iris.json:MixinModelViewBobbing from mod iris failed injection check, (0/1) succeeded. Scanned 1 target(s). No refMap loaded.
	at org.spongepowered.asm.mixin.injection.struct.InjectionInfo.postInject(InjectionInfo.java:468)
	at org.spongepowered.asm.mixin.transformer.MixinTargetContext.applyInjections(MixinTargetContext.java:1384)
	at org.spongepowered.asm.mixin.transformer.MixinApplicatorStandard.applyInjections(MixinApplicatorStandard.java:1062)
	at org.spongepowered.asm.mixin.transformer.MixinApplicatorStandard.applyMixin(MixinApplicatorStandard.java:402)
	at org.spongepowered.asm.mixin.transformer.MixinApplicatorStandard.apply(MixinApplicatorStandard.java:327)
	at org.spongepowered.asm.mixin.transformer.TargetClassContext.apply(TargetClassContext.java:422)
	at org.spongepowered.asm.mixin.transformer.TargetClassContext.applyMixins(TargetClassContext.java:403)
	at org.spongepowered.asm.mixin.transformer.MixinProcessor.applyMixins(MixinProcessor.java:363)
	... 22 more


A detailed walkthrough of the error, its code path and all known details is as follows:
---------------------------------------------------------------------------------------

-- Head --
Thread: Render thread
Stacktrace:
	at net.minecraft.class_4668.(class_4668.java:126)
	at net.minecraft.class_8538.method_51643(class_8538.java:22)
	at net.minecraft.class_377.method_2012(class_377.java:176)
	at net.minecraft.class_7191.bake(class_7191.java:53)
	at net.minecraft.class_377.method_57038(class_377.java:65)
	at net.minecraft.class_377.method_57036(class_377.java:54)
	at net.minecraft.class_377.method_2004(class_377.java:49)
	at net.minecraft.class_378.method_27540(class_378.java:66)
	at net.minecraft.class_156.method_654(class_156.java:506)
	at net.minecraft.class_378.(class_378.java:66)
	at net.minecraft.class_310.(class_310.java:561)

-- Initialization --
Details:
Modules: ADVAPI32.dll:Advanced Windows 32 Base API:10.0.19041.1 (WinBuild.160101.0800):Microsoft Corporation COMCTL32.dll:User Experience Controls Library:6.10 (WinBuild.160101.0800):Microsoft Corporation CRYPT32.dll:Crypto API32:10.0.19041.1 (WinBuild.160101.0800):Microsoft Corporation CRYPTBASE.dll:Base cryptographic API DLL:10.0.19041.3636 (WinBuild.160101.0800):Microsoft Corporation CRYPTSP.dll:Cryptographic Service Provider API:10.0.19041.3636 (WinBuild.160101.0800):Microsoft Corporation ColorAdapterClient.dll:Microsoft Color Adapter Client:10.0.19041.3636 (WinBuild.160101.0800):Microsoft Corporation CoreMessaging.dll:Microsoft CoreMessaging Dll:10.0.19041.3930:Microsoft Corporation CoreUIComponents.dll:Microsoft Core UI Components Dll:10.0.19041.3636:Microsoft Corporation DBGHELP.DLL:Windows Image Helper:10.0.19041.3636 (WinBuild.160101.0800):Microsoft Corporation DEVOBJ.dll:Device Information Set DLL:10.0.19041.3996 (WinBuild.160101.0800):Microsoft Corporation DNSAPI.dll:DNS Client API DLL:10.0.19041.1 (WinBuild.160101.0800):Microsoft Corporation GDI32.dll:GDI Client DLL:10.0.19041.3996 (WinBuild.160101.0800):Microsoft Corporation GLU32.dll:OpenGL Utility Library DLL:10.0.19041.1 (WinBuild.160101.0800):Microsoft Corporation IMM32.DLL:Multi-User Windows IMM32 API Client DLL:10.0.19041.3996 (WinBuild.160101.0800):Microsoft Corporation IPHLPAPI.DLL:IP Helper API:10.0.19041.1 (WinBuild.160101.0800):Microsoft Corporation KERNEL32.DLL:Windows NT BASE API Client DLL:10.0.19041.3636 (WinBuild.160101.0800):Microsoft Corporation KERNELBASE.dll:Windows NT BASE API Client DLL:10.0.19041.3636 (WinBuild.160101.0800):Microsoft Corporation MMDevApi.dll:MMDevice API:10.0.19041.1 (WinBuild.160101.0800):Microsoft Corporation MSCTF.dll:MSCTF Server DLL:10.0.19041.1 (WinBuild.160101.0800):Microsoft Corporation MpOav.dll:IOfficeAntiVirus Module:4.18.24030.9 (cd8105518e5571788ee3b6a178bae8fbcdf461a8):Microsoft Corporation NLAapi.dll:Network Location Awareness 2:10.0.19041.4123 
(WinBuild.160101.0800):Microsoft Corporation NSI.dll:NSI User-mode interface DLL:10.0.19041.3636 (WinBuild.160101.0800):Microsoft Corporation NTASN1.dll:Microsoft ASN.1 API:10.0.19041.1 (WinBuild.160101.0800):Microsoft Corporation OLEAUT32.dll:OLEAUT32.DLL:10.0.19041.3636 (WinBuild.160101.0800):Microsoft Corporation Ole32.dll:Microsoft OLE for Windows:10.0.19041.3636 (WinBuild.160101.0800):Microsoft Corporation OpenAL.dll:Main implementation library:1.23.1: POWRPROF.dll:Power Profile Helper DLL:10.0.19041.1 (WinBuild.160101.0800):Microsoft Corporation PROPSYS.dll:Microsoft Property System:7.0.19041.3636 (WinBuild.160101.0800):Microsoft Corporation PSAPI.DLL:Process Status Helper:10.0.19041.3636 (WinBuild.160101.0800):Microsoft Corporation Pdh.dll:Windows Performance Data Helper DLL:10.0.19041.1 (WinBuild.160101.0800):Microsoft Corporation RPCRT4.dll:Remote Procedure Call Runtime:10.0.19041.3636 (WinBuild.160101.0800):Microsoft Corporation SETUPAPI.dll:Windows Setup API:10.0.19041.1 (WinBuild.160101.0800):Microsoft Corporation SHCORE.dll:SHCORE:10.0.19041.1 (WinBuild.160101.0800):Microsoft Corporation SHELL32.dll:Windows Shell Common Dll:10.0.19041.4123 (WinBuild.160101.0800):Microsoft Corporation TmAMSIProvider64.dll:Trend Micro AMSI Provider Module (64-Bit):8.55.0.1074:Trend Micro Inc. 
UMPDC.dll USER32.dll:Multi-User Windows USER API Client DLL:10.0.19041.1 (WinBuild.160101.0800):Microsoft Corporation USERENV.dll:Userenv:10.0.19041.1 (WinBuild.160101.0800):Microsoft Corporation VCRUNTIME140.dll:Microsoft® C Runtime Library:14.29.30139.0 built by: vcwrkspc:Microsoft Corporation VERSION.dll:Version Checking and File Installation Libraries:10.0.19041.3636 (WinBuild.160101.0800):Microsoft Corporation WINHTTP.dll:Windows HTTP Services:10.0.19041.3636 (WinBuild.160101.0800):Microsoft Corporation WINMM.dll:MCI API DLL:10.0.19041.1 (WinBuild.160101.0800):Microsoft Corporation WINSTA.dll:Winstation Library:10.0.19041.3636 (WinBuild.160101.0800):Microsoft Corporation WINTRUST.dll:Microsoft Trust Verification APIs:10.0.19041.4291 (WinBuild.160101.0800):Microsoft Corporation WS2\_32.dll:Windows Socket 2.0 32-Bit DLL:10.0.19041.3636 (WinBuild.160101.0800):Microsoft Corporation WTSAPI32.dll:Windows Remote Desktop Session Host Server SDK APIs:10.0.19041.3636 (WinBuild.160101.0800):Microsoft Corporation Wldp.dll:Windows Lockdown Policy:10.0.19041.1 (WinBuild.160101.0800):Microsoft Corporation amsi.dll:Anti-Malware Scan Interface:10.0.19041.3636 (WinBuild.160101.0800):Microsoft Corporation bcrypt.dll:Windows Cryptographic Primitives Library:10.0.19041.1 (WinBuild.160101.0800):Microsoft Corporation bcryptPrimitives.dll:Windows Cryptographic Primitives Library:10.0.19041.3636 (WinBuild.160101.0800):Microsoft Corporation cfgmgr32.dll:Configuration Manager DLL:10.0.19041.3996 (WinBuild.160101.0800):Microsoft Corporation clbcatq.dll:COM+ Configuration Catalog:2001.12.10941.16384 (WinBuild.160101.0800):Microsoft Corporation combase.dll:Microsoft COM for Windows:10.0.19041.3636 (WinBuild.160101.0800):Microsoft Corporation cryptnet.dll:Crypto Network Related API:10.0.19041.3636 (WinBuild.160101.0800):Microsoft Corporation d3d11.dll:Direct3D 11 Runtime:10.0.19041.3636 (WinBuild.160101.0800):Microsoft Corporation dbgcore.DLL:Windows Core Debugging Helpers:10.0.19041.3636 
(WinBuild.160101.0800):Microsoft Corporation dinput8.dll:Microsoft DirectInput:10.0.19041.1 (WinBuild.160101.0800):Microsoft Corporation drvstore.dll:Driver Store API:10.0.19041.3636 (WinBuild.160101.0800):Microsoft Corporation dwmapi.dll:Microsoft Desktop Window Manager API:10.0.19041.1 (WinBuild.160101.0800):Microsoft Corporation dxcore.dll:DXCore:10.0.19041.3996 (WinBuild.160101.0800):Microsoft Corporation dxgi.dll:DirectX Graphics Infrastructure:10.0.19041.3636 (WinBuild.160101.0800):Microsoft Corporation extnet.dll:OpenJDK Platform binary:21.0.3.0:Microsoft fwpuclnt.dll:FWP/IPsec User-Mode API:10.0.19041.3636 (WinBuild.160101.0800):Microsoft Corporation gdi32full.dll:GDI Client DLL:10.0.19041.4239 (WinBuild.160101.0800):Microsoft Corporation glfw.dll:GLFW 3.4.0 DLL:3.4.0:GLFW icm32.dll:Microsoft Color Management Module (CMM):10.0.19041.3636 (WinBuild.160101.0800):Microsoft Corporation imagehlp.dll:Windows NT Image Helper:10.0.19041.3636 (WinBuild.160101.0800):Microsoft Corporation inputhost.dll:InputHost:10.0.19041.3636 (WinBuild.160101.0800):Microsoft Corporation java.dll:OpenJDK Platform binary:21.0.3.0:Microsoft javaw.exe:OpenJDK Platform binary:21.0.3.0:Microsoft jemalloc.dll jimage.dll:OpenJDK Platform binary:21.0.3.0:Microsoft jli.dll:OpenJDK Platform binary:21.0.3.0:Microsoft jna10656940529470050046.dll:JNA native library:7.0.0:Java(TM) Native Access (JNA) jsvml.dll:OpenJDK Platform binary:21.0.3.0:Microsoft jvm.dll:OpenJDK 64-Bit server VM:21.0.3.0:Microsoft kernel.appcore.dll:AppModel API Host:10.0.19041.3758 (WinBuild.160101.0800):Microsoft Corporation lwjgl.dll lwjgl\_opengl.dll lwjgl\_stb.dll management.dll:OpenJDK Platform binary:21.0.3.0:Microsoft management\_ext.dll:OpenJDK Platform binary:21.0.3.0:Microsoft mdnsNSP.dll:Bonjour Namespace Provider:3,0,0,10:Apple Inc. 
msasn1.dll:ASN.1 Runtime APIs:10.0.19041.3636 (WinBuild.160101.0800):Microsoft Corporation mscms.dll:Microsoft Color Matching System DLL:10.0.19041.1 (WinBuild.160101.0800):Microsoft Corporation msvcp140.dll:Microsoft® C Runtime Library:14.29.30139.0 built by: vcwrkspc:Microsoft Corporation msvcp\_win.dll:Microsoft® C Runtime Library:10.0.19041.3636 (WinBuild.160101.0800):Microsoft Corporation msvcrt.dll:Windows NT CRT DLL:7.0.19041.3636 (WinBuild.160101.0800):Microsoft Corporation mswsock.dll:Microsoft Windows Sockets 2.0 Service Provider:10.0.19041.1 (WinBuild.160101.0800):Microsoft Corporation napinsp.dll:E-mail Naming Shim Provider:10.0.19041.1 (WinBuild.160101.0800):Microsoft Corporation ncrypt.dll:Windows NCrypt Router:10.0.19041.1 (WinBuild.160101.0800):Microsoft Corporation net.dll:OpenJDK Platform binary:21.0.3.0:Microsoft nimdnsNSP.dll:National Instruments Zeroconf Namespace Service Provider:215.0.3f0:National Instruments Corporation nimdnsResponder.dll:National Instruments Zeroconf Library:215.0.3f0:National Instruments Corporation nio.dll:OpenJDK Platform binary:21.0.3.0:Microsoft ntdll.dll:NT Layer DLL:10.0.19041.3636 (WinBuild.160101.0800):Microsoft Corporation ntmarta.dll:Windows NT MARTA provider:10.0.19041.1 (WinBuild.160101.0800):Microsoft Corporation nvapi64.dll:NVIDIA NVAPI Library, Version 512.78 :30.0.15.1278:NVIDIA Corporation nvldumdx.dll:NVIDIA Driver Loader, Version 512.78 :30.0.15.1278:NVIDIA Corporation nvoglv64.dll:NVIDIA Compatible OpenGL ICD:30.0.15.1278:NVIDIA Corporation nvspcap64.dll:NVIDIA Game Proxy:3.21.0.36:NVIDIA Corporation nvwgf2umx.dll:NVIDIA D3D10 Driver, Version 512.78 :30.0.15.1278:NVIDIA Corporation opengl32.dll:OpenGL Client DLL:10.0.19041.3636 (WinBuild.160101.0800):Microsoft Corporation perfos.dll:Windows System Performance Objects DLL:10.0.19041.1 (WinBuild.160101.0800):Microsoft Corporation pnrpnsp.dll:PNRP Name Space Provider:10.0.19041.1 (WinBuild.160101.0800):Microsoft Corporation profapi.dll:User Profile Basic 
API:10.0.19041.4239 (WinBuild.160101.0800):Microsoft Corporation rasadhlp.dll:Remote Access AutoDial Helper:10.0.19041.3636 (WinBuild.160101.0800):Microsoft Corporation rsaenh.dll:Microsoft Enhanced Cryptographic Provider:10.0.19041.1 (WinBuild.160101.0800):Microsoft Corporation sechost.dll:Host for SCM/SDDL/LSA Lookup APIs:10.0.19041.1 (WinBuild.160101.0800):Microsoft Corporation shlwapi.dll:Shell Light-weight Utility Library:10.0.19041.1 (WinBuild.160101.0800):Microsoft Corporation sunmscapi.dll:OpenJDK Platform binary:21.0.3.0:Microsoft textinputframework.dll:"TextInputFramework.DYNLINK":10.0.19041.4239 (WinBuild.160101.0800):Microsoft Corporation ucrtbase.dll:Microsoft® C Runtime Library:10.0.19041.3636 (WinBuild.160101.0800):Microsoft Corporation uxtheme.dll:Microsoft UxTheme Library:10.0.19041.1 (WinBuild.160101.0800):Microsoft Corporation vcruntime140\_1.dll:Microsoft® C Runtime Library:14.29.30139.0 built by: vcwrkspc:Microsoft Corporation verify.dll:OpenJDK Platform binary:21.0.3.0:Microsoft win32u.dll:Win32u:10.0.19041.4291 (WinBuild.160101.0800):Microsoft Corporation windows.storage.dll:Microsoft WinRT Storage API:10.0.19041.1 (WinBuild.160101.0800):Microsoft Corporation winrnr.dll:LDAP RnR Provider DLL:10.0.19041.3636 (WinBuild.160101.0800):Microsoft Corporation wintypes.dll:Windows Base Types DLL:10.0.19041.3636 (WinBuild.160101.0800):Microsoft Corporation wshbth.dll:Windows Sockets Helper DLL:10.0.19041.3636 (WinBuild.160101.0800):Microsoft Corporation xinput1\_4.dll:Microsoft Common Controller API:10.0.19041.1 (WinBuild.160101.0800):Microsoft Corporation zip.dll:OpenJDK Platform binary:21.0.3.0:Microsoft 
Stacktrace:
	at net.minecraft.client.main.Main.main(Main.java:223)
	at net.fabricmc.loader.impl.game.minecraft.MinecraftGameProvider.launch(MinecraftGameProvider.java:470)
	at net.fabricmc.loader.impl.launch.knot.Knot.launch(Knot.java:74)
	at net.fabricmc.loader.impl.launch.knot.KnotClient.main(KnotClient.java:23)

-- System Details --
Details:
Minecraft Version: 1.20.6 Minecraft Version ID: 1.20.6 Operating System: Windows 10 (amd64) version 10.0 Java Version: 21.0.3, Microsoft Java VM Version: OpenJDK 64-Bit Server VM (mixed mode), Microsoft Memory: 111540224 bytes (106 MiB) / 503316480 bytes (480 MiB) up to 2147483648 bytes (2048 MiB) CPUs: 16 Processor Vendor: AuthenticAMD Processor Name: AMD Ryzen 7 5800H with Radeon Graphics Identifier: AuthenticAMD Family 25 Model 80 Stepping 0 Microarchitecture: Zen 3 Frequency (GHz): 3.19 Number of physical packages: 1 Number of physical CPUs: 8 Number of logical CPUs: 16 Graphics card #0 name: NVIDIA GeForce RTX 3050 Ti Laptop GPU Graphics card #0 vendor: NVIDIA Graphics card #0 VRAM (MB): 4096.00 Graphics card #0 deviceId: VideoController1 Graphics card #0 versionInfo: 30.0.15.1278 Graphics card #1 name: AMD Radeon(TM) Graphics Graphics card #1 vendor: Advanced Micro Devices, Inc. Graphics card #1 VRAM (MB): 512.00 Graphics card #1 deviceId: VideoController2 Graphics card #1 versionInfo: 30.0.13002.19003 Memory slot #0 capacity (MB): 8192.00 Memory slot #0 clockSpeed (GHz): 3.20 Memory slot #0 type: DDR4 Memory slot #1 capacity (MB): 8192.00 Memory slot #1 clockSpeed (GHz): 3.20 Memory slot #1 type: DDR4 Virtual memory max (MB): 37702.57 Virtual memory used (MB): 21407.67 Swap memory total (MB): 21926.52 Swap memory used (MB): 1786.43 JVM Flags: 9 total; -XX:HeapDumpPath=MojangTricksIntelDriversForPerformance\_javaw.exe\_minecraft.exe.heapdump -Xss1M -Xmx2G -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC -XX:G1NewSizePercent=20 -XX:G1ReservePercent=20 -XX:MaxGCPauseMillis=50 -XX:G1HeapRegionSize=32M Fabric Mods: betterf3: BetterF3 10.0.0 cloth-config: Cloth Config v14 14.0.126 
cloth-basic-math: cloth-basic-math 0.6.1
 fabric-resource-loader-v0: Fabric Resource Loader (v0) 1.0.5+c5f2432cff cicada: CICADA 0.7.2+1.20.5-and-above do\_a\_barrel\_roll: Do a Barrel Roll 3.5.6+1.20.6 fabric-permissions-api-v0: fabric-permissions-api 0.2-SNAPSHOT mixinsquared: MixinSquared 0.1.1 fabric-api: Fabric API 0.97.8+1.20.6 fabric-api-base: Fabric API Base 0.4.40+80f8cf51ff fabric-api-lookup-api-v1: Fabric API Lookup API (v1) 1.6.59+e9d2a72bff fabric-biome-api-v1: Fabric Biome API (v1) 13.0.25+be5d88beff fabric-block-api-v1: Fabric Block API (v1) 1.0.20+6dfe4c9bff fabric-block-view-api-v2: Fabric BlockView API (v2) 1.0.8+80f8cf51ff fabric-blockrenderlayer-v1: Fabric BlockRenderLayer Registration (v1) 1.1.50+80f8cf51ff fabric-client-tags-api-v1: Fabric Client Tags 1.1.12+7f945d5bff fabric-command-api-v1: Fabric Command API (v1) 1.2.45+f71b366fff fabric-command-api-v2: Fabric Command API (v2) 2.2.24+80f8cf51ff fabric-commands-v0: Fabric Commands (v0) 0.2.62+df3654b3ff fabric-content-registries-v0: Fabric Content Registries (v0) 8.0.4+b82b2392ff fabric-convention-tags-v1: Fabric Convention Tags 2.0.3+7f945d5bff fabric-convention-tags-v2: Fabric Convention Tags (v2) 2.0.0+2b43c5c8ff fabric-crash-report-info-v1: Fabric Crash Report Info (v1) 0.2.27+80f8cf51ff fabric-data-attachment-api-v1: Fabric Data Attachment API (v1) 1.1.15+2a2c66b6ff fabric-data-generation-api-v1: Fabric Data Generation API (v1) 19.0.6+7f945d5bff fabric-dimensions-v1: Fabric Dimensions API (v1) 2.1.68+94793913ff fabric-entity-events-v1: Fabric Entity Events (v1) 1.6.8+e9d2a72bff fabric-events-interaction-v0: Fabric Events Interaction (v0) 0.7.6+c5fc38b3ff fabric-game-rule-api-v1: Fabric Game Rule API (v1) 1.0.50+80f8cf51ff fabric-item-api-v1: Fabric Item API (v1) 8.1.1+17e985d6ff fabric-item-group-api-v1: Fabric Item Group API (v1) 4.0.38+aae0949aff fabric-key-binding-api-v1: Fabric Key Binding API (v1) 1.0.45+80f8cf51ff fabric-keybindings-v0: Fabric Key Bindings (v0) 0.2.43+df3654b3ff fabric-lifecycle-events-v1: Fabric 
Lifecycle Events (v1) 2.3.4+c5fc38b3ff fabric-loot-api-v2: Fabric Loot API (v2) 3.0.4+97f703daff fabric-message-api-v1: Fabric Message API (v1) 6.0.10+109a837cff fabric-model-loading-api-v1: Fabric Model Loading API (v1) 1.0.12+80f8cf51ff fabric-models-v0: Fabric Models (v0) 0.4.11+9386d8a7ff fabric-networking-api-v1: Fabric Networking API (v1) 4.0.8+0dca0349ff fabric-object-builder-api-v1: Fabric Object Builder API (v1) 15.1.3+c5fc38b3ff fabric-particles-v1: Fabric Particles (v1) 4.0.0+c5fc38b3ff fabric-recipe-api-v1: Fabric Recipe API (v1) 5.0.3+c5fc38b3ff fabric-registry-sync-v0: Fabric Registry Sync (v0) 5.0.15+f1240ba7ff fabric-renderer-api-v1: Fabric Renderer API (v1) 3.2.12+97f703daff fabric-renderer-indigo: Fabric Renderer - Indigo 1.5.12+80f8cf51ff fabric-renderer-registries-v1: Fabric Renderer Registries (v1) 3.2.61+df3654b3ff fabric-rendering-data-attachment-v1: Fabric Rendering Data Attachment (v1) 0.3.46+73761d2eff fabric-rendering-fluids-v1: Fabric Rendering Fluids (v1) 3.1.3+2c869dedff fabric-rendering-v0: Fabric Rendering (v0) 1.1.64+df3654b3ff fabric-rendering-v1: Fabric Rendering (v1) 4.2.4+b21c00cbff fabric-resource-conditions-api-v1: Fabric Resource Conditions API (v1) 4.0.1+74e2f560ff fabric-screen-api-v1: Fabric Screen API (v1) 2.0.21+7b70ea8aff fabric-screen-handler-api-v1: Fabric Screen Handler API (v1) 1.3.72+b21c00cbff fabric-sound-api-v1: Fabric Sound API (v1) 1.0.21+c5fc38b3ff fabric-transfer-api-v1: Fabric Transfer API (v1) 5.1.6+c5fc38b3ff fabric-transitive-access-wideners-v1: Fabric Transitive Access Wideners (v1) 6.0.10+74e2f560ff fabricloader: Fabric Loader 0.15.11 mixinextras: MixinExtras 0.3.5 inventoryhud: Inventory HUD + ${version} iris: Iris 1.7.0+mc1.20.6 io\_github\_douira\_glsl-transformer: glsl-transformer 2.0.0-pre13 org\_anarres\_jcpp: jcpp 1.4.14 org\_antlr\_antlr4-runtime: antlr4-runtime 4.11.1 java: OpenJDK 64-Bit Server VM 21 minecraft: Minecraft 1.20.6 sodium: Sodium 0.5.8+mc1.20.6 voxelmap: Voxelmap 1.20.6-1.12.19 
Loaded Shaderpack: ComplementaryShaders\_v4.5.1 Profile: Custom (+1 option changed by user) Launched Version: fabric-loader-0.15.11-1.20.6 Launcher name: minecraft-launcher Backend library: LWJGL version 3.3.3-snapshot Backend API: NVIDIA GeForce RTX 3050 Ti Laptop GPU/PCIe/SSE2 GL version 3.2.0 NVIDIA 512.78, NVIDIA Corporation Window size:  GL Caps: Using framebuffer using OpenGL 3.2 GL debug messages: Using VBOs: Yes Is Modded: Definitely; Client brand changed to 'fabric' Universe: 404 Type: Client (map\_client.txt) Locale: en\_AU CPU: 16x AMD Ryzen 7 5800H with Radeon Graphics 
submitted by DaGreeg to fabricmc [link] [comments]


2024.05.10 14:45 IvyHara API access

Hi, I've been updating DNS via the API for years and it's always been great, but for the last week I've been getting "Authenticated user is not allowed access" and I can't seem to find out why. No matter what I try, even testing a newly generated key, I have the same issue. Any ideas why? Thanks
edit:
I ended up moving my DNS over to Cloudflare. The process was really smooth: it even migrated most of my records for me, and I only had to re-add a couple. So I decided to transfer all the domains too, and I'm really happy with what Cloudflare offers, not to mention that each domain was anywhere from 50% to 75% cheaper!
If anyone is interested, I put together a quick script to update the A record I needed. I'm sure it could be improved a lot, but it does what I need for now.
```bash
#!/bin/bash
# Configuration
cloudflare_dns_token="***"
cloudflare_zone_id="***"
cloudflare_record_id="***"
logdest="local7.info"

# Get current external IP
external_ip=$(curl -s "https://api.ipify.org")

# Get Cloudflare DNS IP for ***
fetched_dns_data=$(curl -s -X GET \
  --url "https://api.cloudflare.com/client/v4/zones/${cloudflare_zone_id}/dns_records/${cloudflare_record_id}" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${cloudflare_dns_token}")

# Parse IP from JSON response
cloudflare_ip=$(echo $fetched_dns_data | cut -d ":" -f 8 | tr -d '"' | cut -d "," -f 1)

# Log current IP info
echo "$(date '+%Y-%m-%d %H:%M:%S') - Current External IP is $external_ip, Cloudflare DNS IP for *** is $cloudflare_ip"

# Update DNS if IP has changed
if [ "$cloudflare_ip" != "$external_ip" ] && [ -n "$external_ip" ]; then
  echo "Your IP has changed! Updating DNS on Cloudflare"
  curl -s -X PUT \
    --url "https://api.cloudflare.com/client/v4/zones/${cloudflare_zone_id}/dns_records/${cloudflare_record_id}" \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer ${cloudflare_dns_token}" \
    -d '{ "content": "'"${external_ip}"'", "name": "***", "proxied": true, "type": "A", "comment": "***", "ttl": 1 }'
  logger -p "$logdest" "Changed IP on *** from ${cloudflare_ip} to ${external_ip} (Cloudflare-***)"
fi
```
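One fragile bit in the script above is `cut -d ":" -f 8`, which depends on the exact field order Cloudflare happens to return in the JSON; any API change silently breaks the comparison. `jq -r '.result.content'` is the robust fix, but where jq isn't installed, matching the field by name is still sturdier than counting colons. A sketch (mine, with a made-up sample response, not real API output):

```shell
#!/bin/bash
# Extract the "content" value (the A record's IP) from a Cloudflare
# DNS-record JSON response by field name rather than position.
extract_content() {
  grep -o '"content":"[^"]*"' | head -n 1 | cut -d '"' -f 4
}

# Example with a fabricated response body:
sample='{"result":{"id":"abc","type":"A","name":"example.com","content":"203.0.113.7","proxied":true}}'
echo "$sample" | extract_content   # prints 203.0.113.7
```

In the script, the parse line would become `cloudflare_ip=$(echo "$fetched_dns_data" | extract_content)`.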
submitted by IvyHara to godaddy [link] [comments]


2024.05.10 10:30 saratsekhar QRadar CE 7.5.0 Installation Error

Hello,
Following error reported during installation, please advise.
An unknown error has occurred
anaconda 33.16.8.9 exception report
Traceback (most recent call first):
File "/usr/lib/python3.6/site-packages/dasbus/client/handler.py", line 497, in _handle_method_error
raise exception from None
File "/usr/lib/python3.6/site-packages/dasbus/client/handler.py", line 477, in get_method_reply
return self._handle_method_error(error)
File "/usr/lib/python3.6/site-packages/dasbus/client/handler.py", line 447, in _call_method
**kwargs,
File "/usr/lib64/python3.6/site-packages/pyanaconda/modules/common/task/__init__.py", line 46, in sync_run_task
task_proxy.Finish()
File "/usr/lib64/python3.6/site-packages/pyanaconda/installation_tasks.py", line 521, in run_task
sync_run_task(self._task_proxy)
File "/usr/lib64/python3.6/site-packages/pyanaconda/installation_tasks.py", line 490, in start
self.run_task()
File "/usr/lib64/python3.6/site-packages/pyanaconda/installation_tasks.py", line 311, in start
item.start()
File "/usr/lib64/python3.6/site-packages/pyanaconda/installation_tasks.py", line 311, in start
item.start()
File "/usr/lib64/python3.6/site-packages/pyanaconda/installation_tasks.py", line 311, in start
item.start()
File "/usr/lib64/python3.6/site-packages/pyanaconda/installation.py", line 406, in run_installation
queue.start()
File "/usr/lib64/python3.6/threading.py", line 885, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib64/python3.6/site-packages/pyanaconda/threading.py", line 280, in run
threading.Thread.run(self)
pyanaconda.modules.common.errors.installation.SecurityInstallationError: /usr/sbin/authconfig is missing. Cannot setup authentication
What do you want to do now?
1) Report Bug
2) Debug
3) Run shell
4) Quit

submitted by saratsekhar to QRadar [link] [comments]


2024.05.09 16:55 tempmailgenerator Implementing Encrypted and Sensitivity-Labeled Emailing in C#

Implementing Encrypted and Sensitivity-Labeled Emailing in C#


Securing Email Communication in C#: A Guide to Encryption and Sensitivity Labels

In the digital age, the security of electronic communication has never been more critical, especially when it involves sensitive information. Developers and IT professionals are increasingly tasked with ensuring that email communications not only reach their intended recipients but do so in a manner that protects the information from unauthorized access. This challenge has led to the rise of encryption and the use of sensitivity labels in email systems, particularly within applications developed in C#. The first half of this introduction will explore the importance of implementing these security measures and the basic concepts behind email encryption and sensitivity labeling.
The second half delves into the technical journey of integrating these security features into C# applications. The process involves using specific libraries and APIs designed for email handling, encryption, and setting sensitivity labels that classify the email's content according to its confidentiality level. This approach ensures that only designated recipients can access the message, and it alerts them to the sensitivity of the information contained within. By the end of this guide, developers will have a clear roadmap for enhancing the security of their email communications, making them a trusted medium for exchanging sensitive information.

Securing Email Communication with Custom Labels in C#

As digital communication continues to be a cornerstone of business operations, ensuring the security and confidentiality of emails has never been more critical. Encryption and sensitivity labeling play pivotal roles in safeguarding email content, especially when it's necessary to transmit sensitive information within or outside an organization. The concept of sensitivity labels allows senders to classify emails based on the level of confidentiality, ensuring that the content is handled appropriately throughout its lifecycle.
This introduction dives into the realm of encrypted email communication targeted at specific users, highlighting the importance of custom sensitivity labels in C#. By leveraging the capabilities of C#, developers can implement robust solutions that not only encrypt emails but also tag them with custom labels. These labels dictate how the email is treated by recipients' email clients, ensuring that sensitive information is adequately protected and only accessible to intended audiences.
Commands used:
SmtpClient: Used to send email via SMTP protocol.
MailMessage: Represents an email message that can be sent using SmtpClient.
Attachment: Used to attach files to the MailMessage.
NetworkCredential: Provides credentials for password-based authentication schemes such as basic, digest, NTLM, and Kerberos authentication.

Enhancing Email Security Through Custom Sensitivity Labels

In the digital age, the security of email communication is paramount, especially for organizations dealing with sensitive or confidential information. Custom sensitivity labels offer a nuanced approach to email security, allowing organizations to classify and protect their communications based on the content's sensitivity. These labels work by tagging emails with specific attributes that dictate how they should be handled and viewed by recipients. For instance, an email marked as "Confidential" may be restricted from forwarding or copying, thereby limiting its exposure outside the intended audience. This system not only helps in mitigating data breaches but also in complying with various data protection regulations.
Implementing custom sensitivity labels in C# requires a thorough understanding of the .NET Mail API and, in some cases, third-party encryption services. The process involves configuring the SMTP client for secure transmission, creating the email message, and then applying the appropriate labels before sending. Beyond the technical setup, it's crucial for developers and IT professionals to collaborate closely with organizational stakeholders to define the sensitivity levels that align with the company's data governance policies. This collaborative approach ensures that the email labeling system is robust, flexible, and tailored to the specific needs and risks facing the organization, thereby enhancing the overall security posture of email communications.

Example: Sending an Encrypted Email with Custom Sensitivity Label

C# Code Implementation
using System.Net;
using System.Net.Mail;
using System.Security.Cryptography.X509Certificates;

// Initialize the SMTP client
SmtpClient client = new SmtpClient("smtp.example.com");
client.Port = 587;
client.EnableSsl = true;
client.Credentials = new NetworkCredential("username@example.com", "password");

// Create the mail message
MailMessage mail = new MailMessage();
mail.From = new MailAddress("your_email@example.com");
mail.To.Add("recipient_email@example.com");
mail.Subject = "Encrypted Email with Custom Sensitivity Label";
mail.Body = "This is a test email with encryption and custom sensitivity label.";

// Specify the sensitivity label
mail.Headers.Add("Sensitivity", "Company-Confidential");

// Send the email
client.Send(mail);
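For comparison outside .NET, the same idea, tagging an outgoing message with a Sensitivity header before it is handed to an SMTP client, can be sketched with Python's standard library. The label value mirrors the C# sample; "Company-Confidential" is an example label, not a standard value, and the header alone does not encrypt anything:

```python
from email.message import EmailMessage

# Sketch: attach a custom "Sensitivity" header to a message. Enforcement of
# the label (blocking forwarding, etc.) is up to the receiving mail system;
# the header only carries the classification.
def build_labeled_message(sender: str, recipient: str, subject: str,
                          body: str,
                          label: str = "Company-Confidential") -> EmailMessage:
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg["Sensitivity"] = label  # same header name as in the C# sample
    msg.set_content(body)
    return msg
```

The resulting EmailMessage can then be sent with smtplib's send_message, equivalent to SmtpClient.Send in the C# version.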

Advancing Email Security with Custom Sensitivity Labels in C#

Email communication is a fundamental part of modern business operations, but it also presents significant security risks. Custom sensitivity labels in C# offer a powerful tool for enhancing email security by allowing senders to classify their emails based on the sensitivity of the information contained within. This classification helps in applying appropriate security measures, such as encryption and access restrictions, ensuring that only authorized recipients can access the sensitive content. By integrating custom sensitivity labels, organizations can better protect against data leaks and unauthorized access, aligning with compliance requirements and data protection standards.
Moreover, the implementation of custom sensitivity labels in C# extends beyond mere technical configuration. It requires a strategic approach to information governance, where emails are treated as critical assets that need to be protected based on their content. This approach involves defining what constitutes sensitive information, the criteria for labeling, and the policies for handling emails at each sensitivity level. Through this, businesses can establish a secure email environment that safeguards against data breaches and enhances the integrity of their communication channels, ultimately fostering trust among clients and stakeholders.

FAQs on Email Encryption and Custom Sensitivity Labels

  1. Question: What is email encryption?
  Answer: Email encryption involves encoding email content to prevent unauthorized access, ensuring that only intended recipients can read it.
  2. Question: How do custom sensitivity labels enhance email security?
  Answer: Custom sensitivity labels classify emails by their content's sensitivity, applying specific handling and security measures to protect sensitive information.
  3. Question: Can custom sensitivity labels prevent email forwarding?
  Answer: Yes, emails marked with certain sensitivity labels can be configured to restrict actions like forwarding or copying, enhancing security.
  4. Question: Are custom sensitivity labels compatible with all email clients?
  Answer: Compatibility may vary, but most modern email clients support sensitivity labels if they adhere to common email security standards.
  5. Question: How do I implement custom sensitivity labels in C#?
  Answer: Implementation involves using the .NET Mail API to create and send emails, adding custom headers or properties for sensitivity labels.
  6. Question: Is it necessary to use third-party encryption services with custom sensitivity labels?
  Answer: While not always necessary, third-party encryption services can provide enhanced security and compliance features.
  7. Question: How do sensitivity labels affect email compliance?
  Answer: Sensitivity labels help ensure that email handling aligns with legal and regulatory requirements by protecting sensitive information.
  8. Question: Can sensitivity labels be applied to existing emails?
  Answer: Yes, labels can be applied retroactively, but the process may vary depending on the email system and client.
  9. Question: How do users see and interact with sensitivity labels?
  Answer: Labels are typically visible in the email header or properties, with specific restrictions applied based on the label settings.

Securing Digital Communications: A Necessity in the Modern World

In conclusion, the integration of custom sensitivity labels in C# represents a critical step forward in the quest to secure email communications. As businesses continue to navigate the complexities of the digital landscape, the ability to classify, encrypt, and control access to sensitive information becomes increasingly important. Custom sensitivity labels offer a flexible and effective solution to protect against unauthorized access and data breaches, while also ensuring compliance with regulatory standards. By implementing these labels, organizations can create a more secure and trustworthy environment for their digital communications, thereby protecting their intellectual property, customer data, and ultimately, their reputation. Embracing this approach is not just about adopting new technology; it's about committing to a culture of security and privacy that values and protects sensitive information in every form of communication.
https://www.tempmail.us.com/en/encryption/implementing-encrypted-and-sensitivity-labeled-emailing-in-c

submitted by tempmailgenerator to MailDevNetwork [link] [comments]


2024.05.09 15:11 Ben4425 Odd Samba Name Resolution Error

I've encountered the weirdest problem setting up a new Samba server on Debian 12.5 with Samba 4.17.12. Specifically, I can't access shares when I use just the server's host name but I can access them just fine using the server's fully qualified domain name or its IP address. So:
My client is running Windows 11 and I get the error "Windows cannot access \\vega" if I use just the server name. If I also specify a share, like "\\vega\photos", then I get an "Enter network credentials" prompt where I can enter my user name and SMB password. This immediately returns "Access Denied" and requests the credentials again.
What's odd is that the authentication succeeds but then I get a weird encryption error:
[2024/05/09 09:02:44.311451, 2] ../../source3/auth/auth.c:324(auth_check_ntlm_password)
  check_ntlm_password: authentication for user [myuser] -> [myuser] -> [myuser] succeeded
[2024/05/09 09:02:44.322044, 1] ../../source3/smbd/smb2_tcon.c:245(smbd_smb2_tree_connect)
  smbd_smb2_tree_connect: reject request to share [IPC$] as 'VEGA\myuser' without encryption or signing. Disconnecting.
[2024/05/09 09:02:44.322142, 3] ../../source3/smbd/smb2_server.c:3961(smbd_smb2_request_error_ex)
  smbd_smb2_request_error_ex: smbd_smb2_request_error_ex: idx[1] status[NT_STATUS_ACCESS_DENIED] at ../../source3/smbd/smb2_tcon.c:151
I don't think this is a DNS problem because the client can resolve my '\\vega' server and because 'vega' works fine for other uses such as NFS and SSH.
So, any ideas? I can always use a fully qualified domain name or an IP address but I'd love to understand this because I wasted a couple hours trying to fix my Samba 'smb.conf' file when, apparently, it was just fine and instead there was this weird error with using the host's name.
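One avenue worth testing, going by the "without encryption or signing" rejection in the log: explicitly set the server's signing and encryption policy in smb.conf and see whether the short-name connection then negotiates it. This is a sketch, not a confirmed fix; `server smb encrypt` and `server signing` are standard [global] options in current Samba, but the values below are just a starting point:

```ini
[global]
    ; the rejected tree connect complained about missing encryption/signing,
    ; so advertise encryption and leave signing negotiable
    server smb encrypt = desired
    server signing = default
```

After editing, running testparm will confirm the options parse, and the Windows client needs to reconnect (sign out or `net use * /delete`) before it renegotiates the session.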
P.S. I thought about posting this to samba but that group has about 600 members while DataHoarder has about 700,000.
submitted by Ben4425 to DataHoarder [link] [comments]


2024.05.09 04:19 Fitzgeezy Many AD accounts lockup, and growing

Over the past 8 days or so we have started seeing a huge increase in Active Directory account lockouts across our domain of about 700 users. We have seen it go from about 10 account locks per day, to over 60 locks today.
We are really struggling to find the root cause. We are following most of the usual account lock guidance, ie: EventComb, LockoutStatus, ADAuditPlus, check for Event ID 4740 on the DCs and check the calling computer. Well the calling computer is almost always our authenticating Internet Proxy server. We have already tried clearing Credential Manager, but the problem returns for these users.
The frustrating part is that we only see the 4740 event (account has been locked), but we don't see any preceding 4625 events (bad password) on the DC or client. Yes, I think we have all auditing enabled on the DCs. Without this evidence, we can't tell for sure which computer is sending the bad passwords to AD. I suspect the 4740 from the Proxy Server is just a symptom of the root problem, and some other service is actually sending the bad passwords, and then the Proxy finally just runs into the locked account and creates the 4740 on the DC.
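When the 4740s pile up faster than they can be read one by one, bulk-tallying the calling computers can at least confirm whether the proxy really is the only caller. A hedged sketch, assuming the 4740 events have been exported to CSV; the CallerComputer column name is an assumption about that export, not a fixed Windows field name:

```python
import csv
from collections import Counter

# Hypothetical helper: given a CSV export of 4740 (account locked) events,
# count how often each calling computer appears, to spot anything that shows
# up besides the authenticating proxy. Adjust the column name to match the
# actual export.
def tally_callers(csv_path: str) -> Counter:
    with open(csv_path, newline="") as fh:
        return Counter(row["CallerComputer"] for row in csv.DictReader(fh))
```

A second caller appearing even occasionally would point at the service actually sending the bad passwords, rather than the proxy that merely trips over the already-locked account.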
I also wonder if it is some Kerberos problem, but I can't really find any useful event IDs for this theory either.
Does anybody have any advice on this?
submitted by Fitzgeezy to sysadmin [link] [comments]


2024.05.08 21:05 snyone trying to setup nfs v4.2+ only share but no luck... is it me or the docs are wrong?

Summary

I was trying to set up an NFS share on Fedora 39, with the goal of an NFSv4.2-only setup (v3 and v4.0/4.1 disabled) that works with firewalld enabled.
The client I am testing from is also Fedora (F40 tho instead of F39). And I have confirmed I can access things if I disable firewalld (e.g. sudo systemctl stop firewalld) on the host. But if I have the host firewall up (with named nfs service added), the client stalls out and is not able to mount the share.
I'm guessing that I could add specific ports or configure things differently on the host, but my understanding was that it should be working without anything else.
In addition to the RHEL documentation, I also tried searching online but most of the unofficial stuff I was finding either didn't mention the version, appeared to be doing v3 setup, and/or didn't appear to offer anything new for me to try / also did not work (for a v4.2 only setup anyway).

Questions

So my questions are...
  1. Is it the documentation (below) or my interpretation of it that is wrong? e.g. bad documentation, bad misunderstanding on my part, or both doc + my understanding are correct but there is some kind of bug that applies specifically to the nfs4-only scenario? Mostly wondering in the event of bad doco / bugs if I should file something so it can be fixed by the maintainers. If it's just me, then that's fine.
  2. What additional ports or services should I add to allow nfs4 to work through the firewall? I have tried with recommendations for nfs3 (e.g. named services = mountd nfs rpc-bind + port 2049/tcp) but still have no luck. Is there any easy way to discover what ports the client is attempting to access? I'm not opposed to using wireshark but currently have no clue how to use it (for this or in general).

Documentation I was following

per RHEL 9 documentation, if one is setting up an nfs v4 only server like me, the claim is that:
Using only NFSv4 on the server reduces the number of ports that are open to the network.
and the documentation shows that it is safe to completely disable related nfs3 services, e.g.
# systemctl mask --now rpc-statd.service rpcbind.service rpcbind.socket 
and open only the nfs service in firewalld, e.g.
# firewall-cmd --permanent --add-service nfs
# firewall-cmd --reload
Documentation on RHEL 8 seems to imply much the same thing:
Perform this procedure only if the client is using NFSv4.0. In that case, it is necessary to open a port for NFSv4.0 callbacks.
This procedure is not needed for NFSv4.1 or higher because in the later protocol versions the server performs callbacks on the same connection that was initiated by the client.
This is in reference to setting /proc/sys/fs/nfs/nfs_callback_tcpport but it states that it is only needed for v4.0 so my assumption is that I don't need this for v4.2.
I eventually thought to attempt this anyway when nothing else worked but I got the error:
# sysctl -p /etc/sysctl.d/90-nfs-callback-port.conf
sysctl: cannot stat /proc/sys/fs/nfs/nfs_callback_tcpport: No such file or directory
and did not proceed further since I haven't read up on this much and it seems to be specific to a version I'm trying to avoid anyway.

Host setup

I edited /etc/nfs.conf to disable v3 and v4.0 / v4.1 (I only have Fedora clients running either F39 or F40 so restricting to v4.2 should not be an issue).
# grep -v '^\s*#' /etc/nfs.conf
[general]
[nfsrahead]
[exports]
[exportfs]
[gssd]
use-gss-proxy=1
[lockd]
[exportd]
[mountd]
[nfsdcld]
[nfsdcltrack]
[nfsd]
vers3=n
vers4.0=n
vers4.1=n
rdma=y
rdma-port=20049
[statd]
[sm-notify]

# ip -4 -o -br addr | grep enp0s3
enp0s3 UP 192.168.1.12/24

# grep -v '^\s*#' /etc/exports.d/local-share.exports
/media/hdd2/nfs 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)

# ls -acl '/media/hdd2/nfs'
total 12
drwxrwxr-x. 2 myuser myuser 4096 May 7 18:23 .
drwxr-xr-x. 15 myuser myuser 4096 May 7 16:27 ..
-rw-rw-r--. 1 myuser myuser 27 May 7 17:25 testfile.txt

# getenforce
Enforcing

# getsebool -a | grep -i nfs
cobbler_use_nfs off
colord_use_nfs off
conman_use_nfs off
ftpd_use_nfs off
git_cgi_use_nfs off
git_system_use_nfs off
httpd_use_nfs off
ksmtuned_use_nfs off
logrotate_use_nfs off
mpd_use_nfs off
nagios_use_nfs off
nfs_export_all_ro on
nfs_export_all_rw on
nfsd_anon_write off
openshift_use_nfs off
polipo_use_nfs off
samba_share_nfs off
sanlock_use_nfs off
sge_use_nfs off
tmpreaper_use_nfs off
use_nfs_home_dirs off
virt_use_nfs on
xen_use_nfs off

# systemctl stop nfs-server
# systemctl enable --now nfs-server

# cat /proc/fs/nfsd/versions
-3 +4 -4.0 -4.1 +4.2

# firewall-cmd --state
running

# firewall-cmd --list-all
public (default, active)
  target: default
  ingress-priority: 0
  egress-priority: 0
  icmp-block-inversion: no
  interfaces: enp0s3
  sources:
  services: dhcpv6-client mdns nfs ssh
  ports:
  protocols:
  forward: yes
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
I have also tried with:
  services: dhcpv6-client mdns mountd nfs rpc-bind ssh
  ports: 2049/tcp 20049/tcp
and gotten the same results (2049 since it was mentioned in an old reddit thread on nfs3 and 20049 bc it was in the default settings of my /etc/nfs.conf file)

Client

when host firewall is completely off (sudo systemctl stop firewalld), this works fine
# mount -t nfs4 192.168.1.12:/media/hdd2/nfs /mnt/
# ls -acl /mnt
total 4.0K
-rw-rw-r--. 1 myuser myuser 28 May 8 14:43 testfile.txt
# cat /mnt/testfile.txt
Is it time for a drink yet?
# echo 'Yes!' >> /mnt/testfile.txt
# cat /mnt/testfile.txt
Is it time for a drink yet?
Yes!
# umount /mnt
but when the host firewall is running, the mount command hangs and eventually errors out
# time mount -t nfs4 192.168.1.12:/media/hdd2/nfs /mnt/
mount.nfs4: No route to host for 192.168.1.12:/media/hdd2/nfs on /mnt

real 2m5.019s
user 0m0.003s
sys 0m0.009s
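On the question of discovering which ports the client is trying: before reaching for wireshark, a plain TCP probe from the client can at least distinguish "connection refused" (port reachable but closed) from "filtered" (what a firewalld drop typically looks like). A sketch; 192.168.1.12 and 20049 come from the setup above, and 111 is included only to confirm rpcbind really is masked:

```python
import socket

# Probe one TCP port and classify the result. A firewalld REJECT/DROP shows
# up as "filtered/unreachable" (timeout or no-route), while a reachable host
# with nothing listening gives "refused".
def probe(host: str, port: int, timeout: float = 3.0) -> str:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"
    except ConnectionRefusedError:
        return "refused"
    except OSError:
        return "filtered/unreachable"
```

Run from the client, e.g. `probe("192.168.1.12", 2049)` for NFSv4 and `probe("192.168.1.12", 20049)` for the RDMA port; with an NFSv4.2-only server, 2049/tcp is essentially the only port that needs to come back "open".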
submitted by snyone to linuxquestions [link] [comments]


2024.05.08 11:22 nefarious_bumpps Struggling with NPM, LetsEncrypt and GoDaddy DNS.

I have a small home network, currently running a pfSense firewall, TrueNAS server with NextCloud, a Proxmox server with a few containers and VM's, including a VM running Docker with a dozen containers. Nothing on my home network is currently exposed directly to the Internet except for Wireguard VPN.
My domain name is registered and public DNS is provided by GoDaddy, which was a requirement to configure my Microsoft 365 Family subscription custom email domain. I also host my public website with GoDaddy, using a GoDaddy SSL certificate.
DNS on my LAN is resolved via unbound on my pfSense firewall, using automatic enrollment by DHCP.
I recently installed Nginx Proxy Manager with the intention of eliminating the hassle of specifying port numbers and approving self-signed certificates. However, I've been unable to get NPM/Certbot to create a LetsEncrypt certificate using DNS-01 challenge with GoDaddy. I'm seeing the following errors in my NPM logs:
2024-05-08 01:01:43,131:DEBUG:certbot.plugins.dns_common_lexicon:Encountered error finding domain_id during deletion: Error determining zone identifier for xxxxxxx.com: 403 Client Error: Forbidden for url: https://api.godaddy.com/v1/domains/xxxxxxx.com.
Traceback (most recent call last):
  File "/opt/certbot/lib/python3.11/site-packages/certbot/plugins/dns_common_lexicon.py", line 250, in _resolve_domain
    with Client(self._build_lexicon_config(domain_name)):
  File "/opt/certbot/lib/python3.11/site-packages/lexicon/client.py", line 168, in __enter__
    raise e
  File "/opt/certbot/lib/python3.11/site-packages/lexicon/client.py", line 161, in __enter__
    provider.authenticate()
  File "/opt/certbot/lib/python3.11/site-packages/lexicon/_private/providers/godaddy.py", line 62, in authenticate
    result = self._get(f"/domains/{domain}")
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/certbot/lib/python3.11/site-packages/lexicon/interfaces.py", line 162, in _get
    return self._request("GET", url, query_params=query_params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/certbot/lib/python3.11/site-packages/lexicon/_private/providers/godaddy.py", line 338, in _request
    result.raise_for_status()
  File "/opt/certbot/lib/python3.11/site-packages/requests/models.py", line 1021, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://api.godaddy.com/v1/domains/xxxxxxx.com
I'm not sure exactly what is causing the problem. I double-checked my GoDaddy API key and secret, even tried using a key I am using with pfSense for DDNS updates. Does anyone here have a similar setup that can offer any advice?
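One way to narrow it down is to replay lexicon's exact call outside NPM and certbot. A stdlib-only sketch; the /v1/domains endpoint comes straight from the traceback, "sso-key" is GoDaddy's documented Authorization scheme, and the key/secret are placeholders. A 403 here with a key you know is correct would suggest the rejection is on GoDaddy's side (e.g. account-level API eligibility) rather than a typo in the credentials:

```python
import urllib.error
import urllib.request

# Rebuild the request lexicon's GoDaddy provider makes: GET /v1/domains/{domain}
# with an "sso-key KEY:SECRET" Authorization header.
def build_request(domain: str, key: str, secret: str) -> urllib.request.Request:
    return urllib.request.Request(
        f"https://api.godaddy.com/v1/domains/{domain}",
        headers={"Authorization": f"sso-key {key}:{secret}"},
    )

def check_godaddy_key(domain: str, key: str, secret: str) -> int:
    try:
        with urllib.request.urlopen(build_request(domain, key, secret),
                                    timeout=10) as resp:
            return resp.status  # 200 means the key can see the domain
    except urllib.error.HTTPError as e:
        return e.code  # 403 here reproduces the certbot failure directly
```

For example, `check_godaddy_key("xxxxxxx.com", "MY_KEY", "MY_SECRET")` returning 403 takes NPM and certbot out of the picture entirely.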
submitted by nefarious_bumpps to selfhosted [link] [comments]


2024.05.07 13:56 tempmailgenerator Utilizing Gmail with System.Net.Mail for Email Dispatch

Utilizing Gmail with System.Net.Mail for Email Dispatch


Email Integration Mastery with Gmail and System.Net.Mail

Email has become an indispensable tool in our daily communication, serving as a bridge for both personal and professional interactions. In the realm of software development, the ability to programmatically send emails can significantly enhance the functionality of applications, providing immediate communication capabilities. This is where integrating Gmail with System.Net.Mail comes into play, offering a streamlined approach to dispatch emails directly from within .NET applications.
Using Gmail as an SMTP server through System.Net.Mail not only simplifies the email sending process but also leverages Gmail's reliable and secure infrastructure. This integration enables developers to send emails, including attachments and HTML content, with minimal setup. Such capability is crucial for applications requiring notifications, password resets, or any form of automated correspondence, making it a valuable skill for developers to master.
Commands used:
SmtpClient: Represents an SMTP client in .NET, used to send emails.
MailMessage: Represents an email message that can be sent using SmtpClient.
NetworkCredential: Provides credentials for password-based authentication schemes such as basic, digest, NTLM, and Kerberos authentication.
EnableSsl: A boolean property that specifies whether the SmtpClient uses SSL to encrypt the connection.

Setting Up SMTP Client for Gmail

C# Example
using System.Net;
using System.Net.Mail;

var smtpClient = new SmtpClient("smtp.gmail.com")
{
    Port = 587,
    Credentials = new NetworkCredential("yourEmail@gmail.com", "yourPassword"),
    EnableSsl = true,
};

Sending an Email

C# Implementation
var mailMessage = new MailMessage
{
    From = new MailAddress("yourEmail@gmail.com"),
    Subject = "Test Subject",
    Body = "Hello, this is a test email.",
    IsBodyHtml = true,
};
mailMessage.To.Add("recipientEmail@gmail.com");
smtpClient.Send(mailMessage);
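For readers outside .NET, the same Gmail flow (port 587, STARTTLS, HTML body) can be sketched with Python's standard library. Note that Gmail now generally requires an app password or OAuth rather than the plain account password:

```python
import smtplib
from email.message import EmailMessage

# Mirrors the C# sample: a message with an HTML alternative, sent via
# smtp.gmail.com:587 with STARTTLS (the equivalent of EnableSsl = true).
def build_message(sender: str, recipient: str,
                  subject: str, html_body: str) -> EmailMessage:
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content("This message requires an HTML-capable client.")
    msg.add_alternative(html_body, subtype="html")
    return msg

def send_via_gmail(msg: EmailMessage, password: str) -> None:
    with smtplib.SMTP("smtp.gmail.com", 587) as server:
        server.starttls()
        server.login(msg["From"], password)
        server.send_message(msg)
```

Wrapping send_via_gmail in a try/except around smtplib.SMTPException plays the same role as catching exceptions from SmtpClient.Send in the C# version.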

Exploring Email Automation with Gmail and .NET

Email automation has become a cornerstone in modern application development, providing a seamless way for applications to communicate with users. Leveraging the power of Gmail's SMTP server through the System.Net.Mail namespace in .NET allows developers to implement robust email sending functionalities within their applications. This capability is not just about sending simple text emails; it extends to sending emails with attachments, HTML content, and even with custom headers for advanced scenarios such as email tracking. The integration of Gmail with System.Net.Mail in .NET projects presents a reliable and secure method for email dispatch, taking advantage of Gmail's efficient delivery system and strong security measures to protect sensitive information.
Furthermore, this approach facilitates the automation of various communication processes, such as user verification emails, newsletters, and system notifications, among others. It enables developers to programmatically control the email's content, recipient, and sending time, making it an invaluable tool for creating dynamic, responsive applications. However, it's essential to handle this power responsibly by ensuring the security of user credentials and adhering to anti-spam laws to maintain a trustful relationship with users. The process of setting up and using Gmail's SMTP server with System.Net.Mail is straightforward, but it requires attention to detail to configure the SMTP client correctly, especially regarding security settings like SSL and authentication. By mastering these aspects, developers can enhance their applications' functionality and reliability, ensuring a smooth and secure email communication experience.

Enhancing Communication with System.Net.Mail and Gmail

Integrating Gmail with System.Net.Mail for email automation offers a plethora of benefits for developers and businesses alike. This powerful combination enables the development of applications that can send emails with ease, leveraging Gmail's robust and secure infrastructure. By using System.Net.Mail, developers can programmatically send emails, manage attachments, and customize email content with HTML, making it an ideal solution for a wide range of applications, from customer service tools to automated alert systems. The flexibility and reliability of Gmail's SMTP server ensure that emails are delivered promptly and securely, providing a seamless user experience.
Moreover, the integration supports advanced features such as setting priority levels for messages, specifying CC and BCC recipients, and implementing error handling mechanisms to manage issues related to email sending. These features are crucial for creating sophisticated email functionalities that can cater to the complex requirements of modern applications. With proper configuration and understanding of SMTP settings, developers can maximize the effectiveness of their email communications, making this integration a vital component of any application that requires email capabilities. However, it's important to adhere to best practices for email sending, such as respecting user privacy, avoiding spamming, and ensuring that emails are properly authenticated to prevent being flagged as spam.

Frequently Asked Questions About System.Net.Mail and Gmail Integration

  1. Question: Can I use Gmail to send emails from any .NET application?
  Answer: Yes, you can use Gmail's SMTP server to send emails from any .NET application using System.Net.Mail.
  2. Question: Do I need to enable any settings in my Gmail account to use it with System.Net.Mail?
  Answer: Yes, you may need to enable "Less secure app access" in your Gmail account, although it's recommended to use OAuth 2.0 for better security.
  3. Question: How do I handle attachments when sending emails with System.Net.Mail?
  Answer: Attachments can be added to the MailMessage object using the Attachments property, which accepts Attachment objects.
  4. Question: Is SSL required when using Gmail's SMTP server?
  Answer: Yes, SSL must be enabled for the SmtpClient when using Gmail's SMTP server to ensure secure email transmission.
  5. Question: Can I send HTML emails using System.Net.Mail with Gmail?
  Answer: Yes, you can set the IsBodyHtml property of the MailMessage object to true to send HTML emails.
  6. Question: How can I handle failed email delivery attempts?
  Answer: You can catch exceptions thrown by the SmtpClient.Send method to handle failed delivery attempts and take appropriate actions.
  7. Question: Can I send emails to multiple recipients at once?
  Answer: Yes, you can add multiple email addresses to the To, CC, and BCC properties of the MailMessage object.
  8. Question: How do I set the priority of an email sent through Gmail with System.Net.Mail?
  Answer: You can set the Priority property of the MailMessage object to control the email's priority.
  9. Question: Is it possible to track whether an email was opened or not?
  Answer: Email tracking typically requires embedding a tracking pixel or using specialized email tracking services; System.Net.Mail alone does not provide this functionality.

Mastering Email Automation: A Closing Reflection

As we've explored the integration of Gmail with System.Net.Mail, it's clear that this combination provides a robust framework for email automation within .NET applications. This functionality not only streamlines the process of sending emails but also opens up new avenues for application-to-user communication. Whether it's for sending notifications, confirmations, or promotional content, the ability to automate these communications reliably and securely is invaluable. However, developers must navigate this process with a keen eye on security, particularly in handling credentials and ensuring compliance with anti-spam regulations. Looking forward, as email remains a critical communication tool, leveraging these technologies effectively will continue to be a key skill for developers. This exploration underscores the importance of understanding both the technical and ethical considerations of email automation, ensuring that applications communicate effectively while respecting user privacy and trust.
https://www.tempmail.us.com/en/gmail/utilizing-gmail-with-system-net-mail-for-email-dispatch

submitted by tempmailgenerator to MailDevNetwork [link] [comments]


2024.05.05 14:05 AlpineGuy Discussion of the most common homelab network setups (open ports, closed ports, VPNs, let's encrypt, etc.)

Discussion of the most common homelab network setups (open ports, closed ports, VPNs, let's encrypt, etc.)
I am trying to redesign my homelab's networking setup and have a hard time deciding which option to go for.
I have seen around here mainly four different basic layouts that people use. I quickly created some diagrams to illustrate - see below (hope the basic outlines are understandable).
  • Option 1 - putting web services on the open internet - seems to be less and less desired, even though many howtos still describe this
  • Option 2 - having stuff behind a VPN but picking up public certificates from a VPS
  • Option 3 - private CA, private network, private everything
  • Option 4 - everything through tunnels, with the central point being a VPS
  • (Option 5 that I frequently read about here would be tailscale or some other VPN service, but it is technically more or less the same as my Option 4).
Which option do you use and why? Do you see additional pros/cons that I haven't seen? Do you have another setup not mentioned? Do you find any of the options absolutely bad?

https://preview.redd.it/vbguwl0vklyc1.jpg?width=731&format=pjpg&auto=webp&s=aad4d9d82403805e339394bfa13dcdf179877291

submitted by AlpineGuy to homelab [link] [comments]

