Proxy access to aol

Windscribe - Free VPN and Ad Block

2017.04.03 16:58 raywj1993 Windscribe - Free VPN and Ad Block

Windscribe is a VPN desktop application and VPN/proxy browser extension that work together to block ads and trackers, restore access to blocked content, and help you safeguard your privacy online. windscribe.com
[link]


2023.01.09 18:08 vpn_fail vmess

A place to discuss V2RAY / VMESS proxy servers for free internet access.
[link]


2023.09.14 01:03 texasjobs bhproxy

✅ Unlimited Data: Say goodbye to data concerns. Stream, browse, and enjoy without any restrictions! ✅ Blazing Speeds of Up To 30 Mbps: Immerse yourself in lightning-fast connectivity, consistently ranging from 25 to 30 Mbps. ✅ Your Exclusive Personal Proxy: The proxy you lease from us is reserved exclusively for YOU. No shared access, guaranteed! Unbeatable Value: Access all these incredible features for just $150/month!
[link]


2024.05.15 02:06 saeched HTTP vs REST and Cognito vs IAM when using AWS as an authentication proxy for an on-premise API?

We have three different api routes:
/metrics /delete /status 

My plan:

However, I have some questions about how to do this best:
submitted by saeched to aws [link] [comments]


2024.05.15 00:54 prehensilefail 1U & 1 WAN IP... best practise for basic PVE install? i.e. how to setup WAN/LAN access.

Hiya, so a friend has offered me 1U of rack space free of charge for a year, with a single WAN IP. I have a suitable server ready to go with Proxmox installed. Since I only have the one IP and port to connect to at the DC, I'm thinking a pfSense VM as the router/firewall with one of the NICs passed through, PVE etc. behind this, then a reverse proxy service to handle the CT/VM traffic, and maybe Headscale to access PVE via a management VM? I'm not planning on putting anything sensitive or valuable on it, but it's a great opportunity to learn. So I guess the TL;DR is: how best to use 1 port & 1 WAN IP? I know the pfSense VM has to boot for anything to work, which is a weakness. Are there better ways? Wondering aloud atm, as I'm usually not so constrained at home or at work.
submitted by prehensilefail to Proxmox [link] [comments]


2024.05.15 00:30 Spidey1980 Looking for a Javascript library code review, under 3k lines.

I have a pure JavaScript windowing library and am in need of a code review. It is basically a web app OS. It draws a lot upon AngularJS concepts and architecture, has 30+ implemented window options, is fully customizable, and is responsive to browser resize. It has a setter proxy for reactive automatic changes, and has data binding and click binding, all like AngularJS. It takes a start function on init, and in that gives access to the system much as AngularJS does with $scope, while only returning an appID to global scope. It sets a hidden security div; if another instance is created with code typed in the address bar, it checks for the security div, gives a warning, and does nothing, for security. Please find my alpha release and a demo HTML file on GitHub: https://github.com/Akadine/jsWin
submitted by Spidey1980 to codereview [link] [comments]


2024.05.15 00:25 Spidey1980 Could someone do a Javascript code review for me?

I have a pure JavaScript windowing library and am in need of a code review. It is basically a web app OS. It draws a lot upon AngularJS concepts and architecture, has 30+ implemented window options, is fully customizable, and is responsive to browser resize. It has a setter proxy for reactive automatic changes, and has data binding and click binding, all like AngularJS. It takes a start function on init, and in that gives access to the system much as AngularJS does with $scope, while only returning an appID to global scope. It sets a hidden security div; if another instance is created with code typed in the address bar, it checks for the security div, gives a warning, and does nothing, for security. Please find my alpha release and a demo HTML file on GitHub: https://github.com/Akadine/jsWin
submitted by Spidey1980 to learnprogramming [link] [comments]


2024.05.15 00:02 OkAdministration6696 Docker container intermittently offline - struggling to diagnose

Resolved: Issues with bridge network
Hi all,
I have a weird network issue with a Docker container running on my host. It will be online and available, but every minute or so it is unreachable before coming back online.
The container is Babybuddy, which is currently very useful for my wife and me for timing and tracking pumping, feeding, etc., so I'm really hoping to resolve this.
# curl hostip:8000 -v
* Trying hostip:8000...
* connect to hostip port 8000 failed: No route to host
* Failed to connect to hostip port 8000 after 6107 ms: Couldn't connect to server
* Closing connection 0
curl: (7) Failed to connect to hostip port 8000 after 6107 ms: Couldn't connect to server
10 seconds later
curl hostip:8000 -v
* Trying hostip:8000...
* Connected to hostip (hostip) port 8000 (#0)
< HTTP/1.1 302 Found
< Server: nginx
< Date: Tue, 14 May 2024 21:11:31 GMT
< Content-Type: text/html; charset=utf-8
< Content-Length: 0
< Connection: keep-alive
< Location: /login/?next=/
< Expires: Tue, 14 May 2024 21:11:31 GMT
< Cache-Control: max-age=0, no-cache, no-store, must-revalidate, private
< X-Frame-Options: DENY
< Vary: Accept-Language, Cookie
< Content-Language: en-US
< X-Content-Type-Options: nosniff
< Referrer-Policy: same-origin
< Cross-Origin-Opener-Policy: same-origin
<
* Connection #0 to host hostip left intact
If I do the same thing using the Docker IP (172.x.x.x), the same thing occurs: works, then fails. This is the same general experience when accessing the web interface.
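To characterize the flapping, a small stdlib Python probe can sample the published port once a second; the host IP and port below are placeholders for your setup:

```python
import socket
import time

def probe(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def watch(host, port, samples=60, interval=1.0):
    """Print one up/DOWN line per interval; a flapping bridge shows up as runs of DOWN."""
    for _ in range(samples):
        state = "up" if probe(host, port) else "DOWN"
        print(time.strftime("%H:%M:%S"), state)
        time.sleep(interval)

# Example (placeholder host IP and port):
# watch("192.168.1.50", 8000)
```

Running this against both the host-published port and the container IP directly helps narrow down whether the problem is in the bridge network or in the container itself.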
If I curl another container on the same bridge network I never have this problem; the connection response is quite different, but expected (below is Overseerr).
curl hostip:5055 -v
* Trying hostip:5055...
* Connected to hostip (hostip) port 5055 (#0)
< HTTP/1.1 307 Temporary Redirect
< X-Powered-By: Express
< Location: /login
< Date: Tue, 14 May 2024 21:14:08 GMT
< Connection: keep-alive
< Keep-Alive: timeout=5
< Transfer-Encoding: chunked
<
* Connection #0 to host hostip left intact
The container is on a bridge network. Bridge network contains about 10 other containers, all (except for portainer, detailed at the bottom) with no issues.
Babybuddy is deployed with 8000:8000, but I have tried various ports without success.
# docker container ls --format "table {{.ID}}\t{{.Names}}\t{{.Ports}}" -a
Babybuddy: 80/tcp, 443/tcp, 0.0.0.0:8000->8000/tcp, :::8000->8000/tcp
Portainer: 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp, 8000/tcp, 0.0.0.0:9443->9443/tcp, :::9443->9443/tcp
Nginx Proxy Manager: 80/tcp, 443/tcp, 0.0.0.0:8000->8000/tcp, :::8000->8000/tcp
The Nginx Proxy Manager line seems off to me (both it and Babybuddy map 0.0.0.0:8000->8000), but if I stop/rm Nginx Proxy Manager, the problems continue without those ports present. 80/443 are not deployed ports for Babybuddy.
netstat -tulpn | grep 8000
tcp 0 0 0.0.0.0:8000 0.0.0.0:* LISTEN 3007925/docker-prox
tcp6 0 0 :::8000 :::* LISTEN 3007931/docker-prox
I have about 4 clients hitting this container. Home Assistant integrations, 2 mobiles, 1 desktop. In the homeassistant integration you can see the entities dropping unavailable and coming back constantly.
The only other weird issue on my host is Portainer runs very slow occasionally. Loading any part of the portainer gui will pause for 5-10 seconds and then load. Curl to portainer can also take a while sometimes but always eventually responds (could just be timeout settings). If I stop/rm portainer however this makes no difference to babybuddy.
Last resort is probably a macvlan, moving babybuddy to the HA addon, or just sticking this container on a Pi or similar. But I'm trying to consolidate given our baby situation! :)
Thanks in advance.
Openmediavault 7 10c20t 64GB, nvme disk. Barely any of the above is utilised. 
submitted by OkAdministration6696 to selfhosted [link] [comments]


2024.05.14 23:14 Joetunn What's the correct way to enable an SSL certificate for HTTPS on Synology? Do I have to port forward for this?

Background: I want to use Actual Budget on my Synology NAS (locally), but for this it seems I need a correctly signed subdomain (when I access my Portainer-deployed Actual Budget via IP, I get a message that I need proper HTTPS).
So I wanted to set up subdomains via reverse proxy as shown in this guide on how to enable a wildcard certificate.
Now the first step in all of this is enabling HTTPS in DSM 7, for which I read this guide.
In the guide it says
Be sure to open the necessary ports through your router settings, both TCP/UDP. Most people forget to port forward their ports in their router settings. If the ports that point to your Synology NAS device are not opened, all the services you have activated on those specific ports will not work. This is a step overlooked by many but essential for the perfect functioning of HTTPS. All you need to do is give your Synology device permissions on ports 80, 443, 5001, both TCP and UDP. Follow the instructions in the image below and remember that each router has a different design than the one presented in this image below.
In theory it makes sense: in some way the internet needs access to verify the SSL certificate, right?
But I can't wrap my head around why I would need to open ports to reach my goal, which is simply to run Actual Budget self-hosted in a Docker (Portainer) container.
Am I misunderstanding everything completely or what am I doing wrong?
submitted by Joetunn to synology [link] [comments]


2024.05.14 23:12 Sohmsss Best resources for learning

Hey guys, apologies if this is not the best place for this post. I'm looking for some resources to learn a bit more. I'm running a spare PC as a server; I mostly use it for hosting Nextcloud, Jellyfin, Llama 3 with a web UI, and a few Docker containers for hosting things like VS Code. I'm using Tailscale, so I can only access it on my Tailscale network, as I'm not sure of best practices for exposing ports outside my network. I think I need to be using something like a reverse proxy, but I'm not too sure where to start. Appreciate the help.
submitted by Sohmsss to homelab [link] [comments]


2024.05.14 22:27 VAI_YR Securing Local Network Traffic with a Subdomain from DuckDNS

I own a domain but I don't want to point it directly to my home network to reduce the attack surface. However, it would be really handy to have an easy way to access services without memorizing ports, something like service.homelab.local
I've found duckdns.org and created a subdomain (e.g. mysubdomain.duckdns.org) that points to my local homelab VM's IP address. The goal was to use this subdomain with a reverse proxy like Traefik or Nginx Proxy Manager and secure it with Let's Encrypt SSL/TLS certificates. Most guides I've found are for domains registered with Cloudflare, like the basic setup from https://github.com/AdrienPoupa/docker-compose-nas
However, I'm struggling to get the DNS01 challenge working with my duckdns.org subdomain for Let's Encrypt.
Questions:
  • What's the recommended way to securely access local services using a subdomain without exposing my main domain or IP?
  • Has anyone successfully set up Traefik or Nginx Proxy Manager with a duckdns.org subdomain and Let's Encrypt? If so, could you share your configuration or point me to a relevant guide?
  • Are there any potential security concerns or best practices I should be aware of when using a dynamic DNS service like duckdns.org for local network access?
I'd really appreciate any advice or recommendations from the community on how to properly set this up.
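For reference, DuckDNS exposes a simple HTTP update API that can set the `_acme-challenge` TXT record, which is what a DNS-01 integration does under the hood (Traefik's `duckdns` provider automates exactly this). A minimal manual sketch, where the subdomain and token are placeholders for your own DuckDNS account values:

```python
import urllib.parse
import urllib.request

def set_duckdns_txt(subdomain, token, txt_value):
    """Publish a TXT record via the DuckDNS update API; returns 'OK' or 'KO'."""
    params = urllib.parse.urlencode({
        "domains": subdomain,   # e.g. "mysubdomain" (no .duckdns.org suffix)
        "token": token,         # your DuckDNS account token
        "txt": txt_value,       # the ACME DNS-01 challenge value
    })
    with urllib.request.urlopen("https://www.duckdns.org/update?" + params) as resp:
        return resp.read().decode().strip()

# Example (placeholder values):
# set_duckdns_txt("mysubdomain", "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee", "challenge-value")
```

Since DNS-01 only requires outbound HTTPS to publish the record, no inbound ports need to be opened on your network, which is the main appeal over the HTTP-01 challenge.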
submitted by VAI_YR to homelab [link] [comments]


2024.05.14 21:37 TrackingSystemDirect Employee Internet Monitoring - 5 Scary Ways Employee Internet Monitoring Technology Is Watching You At Work!


Did you know your employer might have read every email or private message you sent on your work computer? Scary thought, isn't it? Why would they do that? Employee internet monitoring—that's the watchful system tracking your online work habits. Ever clicked on a website and then realized hours have slipped by? Employers are keen to cut down on that. Reading this, you'll learn why monitoring is in place, how it benefits you, and what you can do to bypass the 5 most common methods of employee internet monitoring.

Top 5 Ways How Employers Monitor Internet Activity

Web Content Filtering

Web content filtering functions by employing software or hardware solutions that evaluate and control the websites or content categories an individual can access. For instance, a company might use this tool to block access to social media websites during work hours to ensure employees stay focused and prevent potential security risks associated with these platforms. This approach helps employers enhance employee productivity, maintain network security, and ensure compliance with company policies regarding internet usage.
How to bypass web content filtering. Employees may attempt to bypass web content filtering by using a VPN or accessing blocked content through a proxy server. However, it's crucial to note that these methods may violate company policies and could have consequences.
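As an illustration of the mechanism described above (not any specific product), a category filter is essentially a lookup from hostname to category against a policy. The category data below is made up for the sketch; real filters use large, vendor-maintained databases:

```python
# Illustrative category data; real filters ship vendor-maintained databases.
CATEGORY_DB = {
    "facebook.com": "social-media",
    "twitter.com": "social-media",
    "example-bank.com": "finance",
}
BLOCKED_CATEGORIES = {"social-media"}

def categorize(host):
    """Match the host or any parent domain against the category database."""
    parts = host.lower().split(".")
    for i in range(len(parts) - 1):
        candidate = ".".join(parts[i:])
        if candidate in CATEGORY_DB:
            return CATEGORY_DB[candidate]
    return "uncategorized"

def is_blocked(host):
    return categorize(host) in BLOCKED_CATEGORIES
```

Matching parent domains is why blocking `facebook.com` also blocks `www.facebook.com` and every other subdomain.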

Firewall & IDS Logs

In your workplace, there are likely tools like firewalls and Intrusion Detection Systems (IDS) that generate logs tracking what happens on the company's network. These logs note things like the data going in and out, IP addresses, and connections. Your employer uses these logs to keep an eye out for unauthorized access, malware, or anything unusual happening online. For example, if you accidentally download something suspicious while at work, these logs record it. This helps your company respond quickly and look into anything that seems out of the ordinary, making sure everything stays secure and compliant with company policies.
How to bypass firewalls & IDS logs. Employees usually cannot directly bypass firewall and IDS logs, as they are backend security measures. However, if employees engage in activities that trigger security alerts, employers may investigate their actions based on the log data.
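Conceptually, the review side of this is just pattern-matching over those logs. A toy sketch over a made-up, simplified log format (real firewall and IDS formats differ per vendor, and the "suspicious" port list here is purely illustrative):

```python
import re

# Hypothetical simplified log format: "ACTION src_ip -> dst_ip:port"
LOG_LINES = [
    "ALLOW 10.0.0.5 -> 142.250.1.1:443",
    "DENY  10.0.0.5 -> 203.0.113.9:4444",
    "ALLOW 10.0.0.7 -> 198.51.100.2:80",
]

SUSPICIOUS_PORTS = {4444, 31337}  # illustrative ports associated with remote-access tools

def flag_suspicious(lines):
    """Return (src, dst, port) tuples for denied or suspicious-port connections."""
    hits = []
    for line in lines:
        m = re.match(r"(ALLOW|DENY)\s+(\S+) -> (\S+):(\d+)", line)
        if not m:
            continue
        action, src, dst, port = m.group(1), m.group(2), m.group(3), int(m.group(4))
        if action == "DENY" or port in SUSPICIOUS_PORTS:
            hits.append((src, dst, port))
    return hits
```

A real pipeline would do the same thing at scale, correlating flagged entries with user accounts and timestamps before anyone investigates.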

Keystroke Loggers

Keystroke loggers are surveillance software that track every key you press on your computer. Companies install them to monitor employee activity, ensuring work-related use and securing sensitive data. Imagine typing confidential client information; keystroke loggers record this to prevent data breaches. They also help enforce company policies by flagging non-work-related activities during office hours.
How to bypass keystroke loggers. Use on-screen keyboards or text-to-speech tools as they don't involve physical keystrokes. Additionally, encrypted communication apps can obscure the content of your messages, though this may not prevent loggers from detecting that you've sent a message. Always be aware that attempting to bypass company monitoring tools can violate company policy and have serious repercussions.

Employee Monitoring Software

Employee monitoring software is a tool your company might use to oversee your computer activities during work. Popular brands of employee monitoring software include Time Doctor, VeriClock, and InterGuard. And why might a company use these products? One example is to flag when you send an email containing sensitive company information, ensuring data security and policy compliance.
How to bypass employee monitoring. You could use personal devices during breaks for private communications. However, circumventing these systems can lead to disciplinary action or job loss, so always consider the consequences and adhere to your workplace's guidelines.

Network Traffic Analysis Tools

Network traffic analysis tools examine your internet use, identifying what sites and services you access while on the company network. Companies deploy these tools to spot unusual activity, like accessing high-risk websites, which could introduce security threats. Picture clicking on a streaming service during work hours; these tools alert IT that non-work-related traffic is occurring.
How to bypass network scrutiny. Consider using a virtual private network (VPN), although this may contravene company policies. Alternatively, use your own data plan on personal devices for non-work browsing to stay under the radar.
Related Content: How Companies Track Vehicle Fleets

7 Ways How To Tell If My Boss Is Spying On My Computer

  1. Be aware of alerts and notifications from time-tracking and productivity-measurement software on your company computer.
  2. Try visiting common social media sites and see if they are blocked.
  3. Check the Task Manager on your computer for any activity-monitoring software you may not be aware of.
  4. Compare the bandwidth allocation and application restrictions on your computer with a colleague's. If your company computer has more restrictions, chances are you are being monitored by your boss.
  5. Indirectly ask the IT department of your office, because not all monitoring software leaves a presence in the Task Manager. Some employee monitoring software is more advanced: it runs in stealth mode and cannot simply be opened.
  6. Check whether your computer's webcam is operating without your approval.
  7. Read your job contract or your company's employee handbook. If a clause for employee monitoring is present, then your boss is surely keeping a check on your internet usage.
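The Task Manager check in item 3 can also be done programmatically. A Linux-only sketch that scans `/proc` for process names matching a watchlist; the name fragments below are illustrative guesses, since actual agent process names vary by vendor (and a Windows work machine would need `tasklist` or WMI instead):

```python
import os

# Illustrative name fragments; actual monitoring agents vary by vendor.
WATCHLIST = ("interguard", "activtrak", "teramind", "timedoctor")

def find_watchlisted_processes(proc_root="/proc"):
    """Scan /proc (Linux) for running process names containing a watchlist entry."""
    hits = []
    for pid in filter(str.isdigit, os.listdir(proc_root)):
        try:
            with open(os.path.join(proc_root, pid, "comm")) as f:
                name = f.read().strip().lower()
        except OSError:
            continue  # process exited, or permission denied
        if any(fragment in name for fragment in WATCHLIST):
            hits.append((int(pid), name))
    return hits
```

As noted in item 5, stealth-mode agents may hide or rename themselves, so an empty result proves nothing.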

Legal Compliance Tips: Responsible Employee Monitoring for Business Owners

If you're a business owner, you're legally allowed to monitor your employees, provided there's a legitimate business interest. Striking the right balance between managing work processes and respecting employees' privacy is essential. Additionally, employees should be notified before any monitoring takes place. Consent requirements may vary depending on your location.
With the rise of remote work, employee internet usage monitoring software and user activity tracking tools are increasingly popular. These systems help improve productivity and protect against insider threats. While it's true that monitoring can boost network security and prevent data loss, it's important to recognize the potential privacy invasion that comes with it. To maintain trust, encourage employees to keep social media usage limited to personal devices and non-work hours. Here are some things you should consider before monitoring an employee's computer activities:
  • Understand local regulations: Research and familiarize yourself with employee monitoring laws in your region to ensure compliance.
  • Establish a clear policy: Create a comprehensive, written policy outlining the extent and purpose of monitoring and share it with employees.
  • Obtain consent: Obtain employee consent, if required by local laws, before implementing monitoring practices.
  • Focus on work-related activities: Limit monitoring to work-related internet usage and activities to minimize privacy invasion.
  • Be transparent: Clearly communicate the monitoring practices, tools, and objectives to your employees.
  • Avoid excessive surveillance: Steer clear of overly invasive methods, such as keystroke logging or unauthorized webcam access.
  • Regularly review your policy: Periodically review and update your monitoring policy to ensure it remains compliant with evolving legal requirements and best practices.
  • Respect personal boundaries: Refrain from monitoring employees during non-work hours or on personal devices.
  • Prioritize employee trust: Create a supportive work environment that respects privacy while maintaining productivity and security.

Frequently Asked Questions

Is Employee Internet Monitoring legal and necessary for my business?

Yes, Employee Internet Monitoring is generally legal, but laws vary by region, so it's crucial to understand local regulations (ACLU). Monitoring software can help track employees' web activity, bandwidth usage, and application usage, offering insights to improve productivity and maintain platform security. However, it's essential to strike a balance between monitoring and respecting employees' privacy.

How can I effectively monitor employee internet usage without invading privacy?

To monitor employee internet usage responsibly, establish a clear policy and communicate it with your staff. Focus on monitoring work-related online activities and use web filtering or data loss prevention tools to prevent access to inappropriate content or unauthorized file transfers. Be transparent about monitoring practices to maintain trust and avoid overly invasive methods like keystroke logging.

What are some of the best employee monitoring software options?

Popular employee monitoring software includes Time Doctor, ActivTrak, and Teramind. These tools provide a range of features like tracking time spent on tasks, monitoring app usage, and offering workforce management solutions. By comparing features, you can choose a tool that best aligns with your business needs and goals for increased productivity.

Can monitoring software help remote workers stay productive and engaged?

Absolutely! Monitoring remote employees' online activities can help you identify areas for improvement and provide tailored support. Tools like terminal servers or remote desktop solutions facilitate remote workforces, while features such as video recordings and behavior analytics help optimize remote employees' performance. Remember, it's essential to communicate expectations and foster a culture of trust.

How can I use the data from employee monitoring tools to improve my business?

Monitoring tools give insights into employees' time management, app usage, and web browsing habits. Use this data to identify trends, detect insider threats, and allocate resources more efficiently. Implement training programs, set performance benchmarks, and consider offering incentives for increased productivity. Ensure you use the data ethically and transparently to maintain a positive work environment.

Can My Employer See My Internet Activity?

Yes, your employer can use various workplace surveillance software and hardware to record everything you do online. Employee monitoring solutions use sophisticated tracking technologies like geolocation, keystroke logging, screenshots, and video recording. All this data can be stored via cloud computing and run through complex algorithms to anticipate insider threats, measure individual and team productivity, and retrace the steps leading to any problems or data leaks.
Related Content: Are Employers Allowed To Track Employee Vehicles?
submitted by TrackingSystemDirect to GPStracking [link] [comments]


2024.05.14 20:36 LordXamon Mod guide for vanilla players, now updated for 1.5 [draft]

Let me share my 3000h of modded wisdom with you, my fellow vanilla comrades. My aim here is to provide as many improvements to the base game as possible while keeping the style, balance, and content as vanilla as possible. As they say, a mod works best when you don't realize it's there. I guarantee that after playing a few dozen hours with these, you will no longer be able to tell what's vanilla and what's not.
I decided to split my recommendations into different lists, so it's easier for you. The most purist players can stick with QoL only. Or, even if you're a psychopath purist who doesn't want any QoL improvements, you can still use the performance list.
Please note that a lot of these mods come with options to tune your experience. It is recommended you give them a look.
Dependencies:
Performance
Quality of life
Minor changes
Major changes
Atmospheric changes
Bonus: comics! And the occasional animation, check the profiles of u/daleksdeservevictory , u/AzulCrescent , u/AetherealVanguard , u/Senseless0 , u/ATTF , u/Aelanna , srgrafo, u/Fonzawa , u/Ivancmedia , u/zyll3 , u/meto30 , u/AeolysScribbles , u/cavalier753 , u/GABESTFY , u/VectorData , u/arxian , u/Nguyenanh2132 , u/sorrowful_dance , u/meto30 , u/-desdinova- , u/truffli
submitted by LordXamon to test [link] [comments]


2024.05.14 20:33 SAV_NC Manage Your Squid Proxy Server Efficiently with This Python Script

🦑 Squid Proxy Manager Script

Hello fellow Python enthusiasts!
I've created a Python script that makes managing your Squid Proxy Server a breeze. If you're looking for an efficient and straightforward way to interact with your Squid server remotely, this script is for you. 🎉

What My Project Does

The Squid Proxy Manager script allows you to manage your Squid Proxy Server remotely using a simple command-line interface. Here are some of the key features:
  • Check Squid Service Status: Quickly check if your Squid service is running or not.
  • Start/Stop/Restart Service: Easily control the Squid service remotely.
  • View Logs: Access the latest entries in your Squid access logs.
  • View Configuration: Display the current Squid configuration file.
  • Update Configuration: Replace the existing Squid configuration with a new one.
  • Reload Service: Reload the Squid service to apply changes without restarting.

Target Audience

This script is designed for anyone who manages a Squid Proxy Server and prefers a command-line tool for remote management. If you are comfortable using Python and SSH, this tool will streamline your workflow and enhance your productivity.

Differences

Here are some aspects that make this Squid Proxy Manager script stand out:
  • Remote Management: Manage your Squid server without needing physical access, thanks to SSH connectivity.
  • Ease of Use: The script provides a simple and intuitive command-line interface, making it easy to perform various tasks.
  • Comprehensive Features: From checking service status to updating configurations and viewing logs, this script covers all essential Squid management tasks.
  • Error Handling and Logging: Detailed logging and error handling ensure you know exactly what's happening and can troubleshoot issues effectively.

🚀 Usage

  1. Installation:
    • Ensure you have the required libraries installed:
```bash
pip install paramiko termcolor
```
  2. Running the Script:
    • Use the script with appropriate arguments to manage your Squid Proxy Server. Here's an example command to check the Squid service status:
```bash
./squid_proxy_manager.py 192.168.2.111 22 username password --check-status
```
  3. Updating Configuration:
    • Create a new configuration file (e.g., new_squid.conf) with your desired settings.
    • Run the script to update the Squid configuration:
```bash
./squid_proxy_manager.py 192.168.2.111 22 username password --update-config new_squid.conf
```

💻 Script Example

Here's a snippet of the script to give you an idea of its simplicity and functionality:
```python
#!/usr/bin/env python3

import paramiko
import argparse
import logging
import sys
import os
from termcolor import colored


class SquidProxyManager:
    def __init__(self, hostname, port, username, password):
        self.hostname = hostname
        self.port = port
        self.username = username
        self.password = password
        self.client = paramiko.SSHClient()
        self.client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

    def connect(self):
        try:
            logging.info(colored("Attempting to connect to {}:{}".format(self.hostname, self.port), 'cyan'))
            self.client.connect(self.hostname, port=self.port, username=self.username, password=self.password)
            logging.info(colored(f"Connected to {self.hostname} on port {self.port}", 'green'))
        except Exception as e:
            logging.error(colored(f"Failed to connect: {e}", 'red'))
            sys.exit(1)

    def disconnect(self):
        self.client.close()
        logging.info(colored("Disconnected from the server", 'green'))

    def execute_command(self, command):
        logging.info(colored("Executing command: {}".format(command), 'cyan'))
        try:
            stdin, stdout, stderr = self.client.exec_command(command)
            stdout.channel.recv_exit_status()
            out = stdout.read().decode()
            err = stderr.read().decode()
            if err:
                logging.error(colored(f"Error executing command '{command}': {err}", 'red'))
            else:
                logging.info(colored(f"Successfully executed command '{command}'", 'green'))
            return out, err
        except Exception as e:
            logging.error(colored(f"Exception during command execution '{command}': {e}", 'red'))
            return "", str(e)

    # More functions here...


def parse_args():
    parser = argparse.ArgumentParser(description="Squid Proxy Manager")
    parser.add_argument('hostname', help="IP address of the Squid proxy server")
    parser.add_argument('port', type=int, help="Port number for SSH connection")
    parser.add_argument('username', help="SSH username")
    parser.add_argument('password', help="SSH password")
    parser.add_argument('--check-status', action='store_true', help="Check Squid service status")
    parser.add_argument('--start', action='store_true', help="Start Squid service")
    parser.add_argument('--stop', action='store_true', help="Stop Squid service")
    parser.add_argument('--restart', action='store_true', help="Restart Squid service")
    parser.add_argument('--view-logs', action='store_true', help="View Squid logs")
    parser.add_argument('--view-config', action='store_true', help="View Squid configuration")
    parser.add_argument('--update-config', help="Update Squid configuration with provided data")
    parser.add_argument('--reload', action='store_true', help="Reload Squid service")
    return parser.parse_args()


def main():
    logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
    args = parse_args()
    logging.info(colored("Initializing Squid Proxy Manager script", 'cyan'))

    manager = SquidProxyManager(args.hostname, args.port, args.username, args.password)
    manager.connect()
    try:
        if args.check_status:
            manager.check_squid_status()
        if args.start:
            manager.start_squid()
        if args.stop:
            manager.stop_squid()
        if args.restart:
            manager.restart_squid()
        if args.view_logs:
            manager.view_squid_logs()
        if args.view_config:
            manager.view_squid_config()
        if args.update_config:
            if not args.update_config.endswith('.conf'):
                logging.error(colored("The provided file must have a .conf extension", 'red'))
            elif not os.path.isfile(args.update_config):
                logging.error(colored(f"Configuration file {args.update_config} not found", 'red'))
            else:
                try:
                    with open(args.update_config, 'r') as config_file:
                        config_data = config_file.read()
                    manager.update_squid_config(config_data)
                except Exception as e:
                    logging.error(colored(f"Error reading configuration file {args.update_config}: {e}", 'red'))
        if args.reload:
            manager.reload_squid()
    finally:
        manager.disconnect()
        logging.info(colored("Squid Proxy Manager operations completed", 'green'))


if __name__ == "__main__":
    main()
```

🌟 Benefits

  • Remote Management: No need to be physically present to manage your Squid server.
  • Ease of Use: Simple command-line interface for quick operations.
  • Versatility: Supports various Squid management tasks, from checking status to updating configurations and viewing logs.

📢 Get Involved!

If you find this script useful, feel free to give it a try and share your feedback. Contributions and suggestions are always welcome! However, comments that are unhelpful and do nothing to improve the script or the author's Python scripting abilities are not welcome! Keep the nasty to yourself.

Access the script

You can find the script here on GitHub.
Happy coding! 🚀
submitted by SAV_NC to Python [link] [comments]


2024.05.14 20:01 Only_Gold_1054 Crowdsourcing Alternative Commodity Market Data - Seeking Feedback

Hello Everyone!

I've been working on something that could be interesting to the community, and I would love your honest feedback.

I've been talking with some of my friends working as traders in the space, particularly those focused on paper trading, who have shared fascinating insights into the lengths they go to gain an informational edge. Many are spending significant sums to establish direct access to key sites around the world, aiming to be the first to know about supply chain disruptions, inventory fluctuations, and other events that could move markets.
Interestingly, this real-time, on-the-ground intel often spreads first in private chats among exclusive networks of traders before hitting mainstream news channels. This allows those with access to act quickly on the information asymmetry.
I'm developing a crowdsourcing platform (using blockchains as payment rails) that aims to revolutionize this dynamic by connecting data providers ("Agents") directly with data consumers, enabling the efficient exchange of unique, actionable information that can inform trading strategies and risk management. The platform would feature verification mechanisms for data integrity, bonding curve pricing to reflect data value, and tokenized incentives for participation.
Key features of the platform include:
  1. Agents can create private data channels and set a bonding curve pricing model, where the price of access increases as more subscribers join. This acts as a proxy for the credibility and value of the Agent's information. (A Bonding Curve would look something like this: 1*72qA4WIL6LvWUnX6tmfviQ.png (640×480) (medium.com) where supply is the current number of subscribers.)
  2. Various verification mechanisms are in place for Agents, such as location tracking, email confirmation, daily check-ins, and sensor connections to ensure data integrity and reliability.
  3. Data consumers do not directly pay Agents, but rather interact with a smart contract that automatically handles payments and access control based on predefined rules.
  4. The platform includes a real-time chat feature for direct communication between Agents and subscribers, enabling more contextual and timely information sharing.
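The bonding-curve pricing in point 1 can be illustrated with a toy linear curve (the base price and slope below are made-up numbers, not platform parameters):

```python
def access_price(subscribers: int, base_price: float = 10.0, slope: float = 0.5) -> float:
    """Linear bonding curve: the price of joining a channel rises with the
    number of existing subscribers, so early subscribers pay less and the
    current price doubles as a rough proxy for the channel's credibility."""
    if subscribers < 0:
        raise ValueError("subscriber count cannot be negative")
    return base_price + slope * subscribers

# The first subscriber pays the base price; the 101st pays base + 100 * slope.
```

Real bonding curves are often convex rather than linear, but the mechanism is the same: later entrants pay more, which rewards early discovery of a good data source.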
To all the community members, I'm seeking your valuable feedback on two key aspects:
  1. What kind of alternative data or market insights would you find most valuable that are currently not easily accessible to you? This could include data on supply chain flows, inventory levels, weather patterns, satellite imagery analysis, or any other information that could give you an edge in your trading decisions. I am aware that alternative data is already a huge industry, but are there valuable sources that are only accessible through last-mile, real-time updates? (For example, in these two posts on weather's influence on oil and coffee, would access to this information through a verified source give you an informational edge?)
  2. What do you foresee as potential drawbacks, risks, or pitfalls of such a platform? Are there any specific concerns around data quality, privacy, security, or regulatory compliance that we should be mindful of? Any thoughts on how to mitigate these risks and build trust in the platform?
Your input would be greatly appreciated as we shape the development of this crowdsourced data exchange platform. The goal is to create a mutually beneficial ecosystem that empowers commodity traders with timely, actionable insights while rewarding data providers for their valuable contributions.
Thank you in advance for your time and insights. I look forward to a fruitful discussion and learning from your experiences in the commodities trading space.
Best regards, OnlyGold
submitted by Only_Gold_1054 to Commodities [link] [comments]


2024.05.14 18:51 job-posts-usa-visa 05.14-3 Jobs with USA visa sponsorship

🌎 Please visit our new unique job portal that has 7000+ positions with USA visa sponsorship! move2usajobs .com. Free trial available!
🙏🙏🙏 PLEASE READ OUR FAQ HERE 🙏🙏🙏 lnkd .in/dSp6jC52
If you find the post useful, kindly like&share! The first website in the industry with real USA jobs for immigrants and foreigners! 👩❤️👨
⚡️NEW! H-1B (CAP and Exempt, Sponsors), J-1 (Internship, Traineeship, Work and Travel, Teaching), Studying in USA (Including Scholarships), CPT and OPT Sponsors, J-2, O-1, B-1/B-2, EB-5, EB-2, EB-3! About 3k$. If you are interested, please check lnkd .in/d65MsiuA
✈️ Try our Visa Getter, increase your chances to immigrate to the USA by 50% lnkd .in/d2vvukps
🌐 New free option - check your eligibility for different USA work visas here relocate2america .com
⚖️ Our legal adviser lnkd .in/dmZE8vgk
#hiring #usajobs #visasponsorship #jobs #jobsearch #findjob #career #applynow #ilovemyjob #usa #us #usjob #workinus #unitedstates #h1b #h2b #h2a #eb3 #greencard #international #abroad
Please delete spaces in the links to access the application pages
  1. Senior Analyst - Data Science
Visa sponsorship
New York, NY
https:// goo .su/ 92JJqLT
  2. Sugar Boilers
H-2 Visa sponsorship
$15.43 per hour
Lakeland, LA
achatman@almaplt .com
  3. Analog Design Engineer
Visa sponsorship
Tempe, AZ
https:// goo .su/ Hagg
  4. Apple and Peach Picker
H-2 Visa sponsorship
$15.81 per hour
Hendersonville, NC
robrob2970@gmail .com
  5. Staff Accountant
H1B Visa sponsorship
San Gabriel, CA
https:// goo .su/ zPe0B3i
  6. Amusement & Recreation Attendant – Food Concessions
H-2 Visa sponsorship
$11.44-$16.47 per hour
Hernando, FL
rudyseastcoast@aol .com
📺 Do you have a YouTube or TikTok channel or a blog? Mention us and earn! Our affiliate program: nkd .in/dBgbkvWW
🏤 Our telegram channel lnkd .in/gAb3HbTz
📲 Please contact us through the online chat on our website
👨💻For international developers and engineers with 3+ years of experience - developer2usa .com
🥖Get sponsored for access to the job portal (we provide one random person with yearly access when a yearly plan is purchased) lnkd .in/dtBrbTsi
When you buy a yearly plan, you sponsor one person in need for yearly access to our job portal!
🎒Our US Work Visa courses lnkd .in/dtnMz2mU
🎰 US Embassy Slots Booking lnkd .in/dFp2rRyp
Please help us grow, share this post with your audience or simply like it!🙏 move2usajobs .com
submitted by job-posts-usa-visa to jobsUSAimmigration [link] [comments]


2024.05.14 18:22 Ur_Anemone Harvard Law expert explains Supreme Court First Amendment case Murthy v. Missouri

Harvard Law expert explains Supreme Court First Amendment case Murthy v. Missouri
According to President Ronald Reagan, “the nine most terrifying words in the English language are: ‘I’m from the government, and I’m here to help.’” The attorneys general of Missouri and Louisiana tend to agree, at least when it comes to federal government involvement in social media platforms’ content moderation policies…
On March 18, the justices will hear oral arguments in a case, Murthy v. Missouri, in which the two states and several individuals claim that federal officials violated the First Amendment in their efforts to “help” social media companies combat mis- and disinformation about COVID-19 and other matters…
This is one of several landmark social media cases the Court is hearing this term, including Lindke v. Freed and O’Connor-Ratcliff v. Garnier, in which they will decide if and when government officials may block private citizens from commenting on their personal social media accounts…
Former national security official and current Harvard Law lecturer, Timothy Edgar ’97, believes that both the states and the federal government have valid arguments, and argues that the justices should channel the spirit of that famous 18th century publisher and postmaster, Benjamin Franklin, who was a proponent of both neutrality and rational discourse…
Timothy Edgar:
Missouri among other states and individuals are arguing that the Biden administration’s involvement in trying to suppress COVID-19 misinformation, especially about vaccines, crossed the line from being public health education to being censorship, by proxy. They argue that the administration was making very aggressive, specific suggestions to those social media companies, either to remove or to downgrade certain kinds of posts, and that by doing that, they transformed the private decisions that those companies made — principally Facebook and Twitter, now X — into public decisions, and that would amount to censorship.
The federal government says this was a voluntary, cooperative effort between social media and the government to combat misinformation and improve public health. They also argue that the government has long engaged in public health education and that even if the government expresses its views bluntly, it has a responsibility to express those views. The First Amendment and concerns about censorship, they say, don’t prevent the government from expressing an opinion about what information is or isn’t truthful when it comes to public health…”
My opinion is that they’re both right and that we need to get some clarity from the courts about where that line is between engagement and public health…
In his early days, Franklin was a printer in Philadelphia and a postmaster. When he was criticized by a number of the citizens of Philadelphia for publishing a controversial essay, Franklin wrote a famous response called “An Apology for Printers,” which is a defense of the idea that printers should be neutral. Here’s the quote:
“Printers are educated in the Belief, that when Men differ in Opinion, both Sides ought equally to have the Advantage of being heard by the Publick; and that when Truth and Error have fair Play, the former is always an overmatch for the latter: Hence they chearfully serve all contending Writers that pay them well, without regarding on which side they are of the Question in Dispute.”
Franklin was defending the idea that there’s a role for service providers — publishers, printers, platforms — to share information and arguing that, if we say that they must agree with everything that’s on their service, then we cut off debate. It is an argument grounded in an enlightenment faith in the idea of rational discourse. Of course, it doesn’t answer the question of whether we should print literally everything — which Franklin did not believe — or when and how platforms should moderate content. But it embodies a certain faith in the marketplace of ideas.
Franklin is making two arguments in his essay. One is the enlightenment idea of rational debate: that the truth will win out. But it also has this very pragmatic point, which is that neutrality is good for business. Printers were natural monopolies in a way that social media platforms can be as well…to serve the public, you need a platform — a printing shop and now a digital platform — that maintains some level of neutrality in order to have a democratic system of government…
When the government communicates with distributors of information, in that case, book publishers, if they do it in a way that makes those businesses feel like they have no choice but to comply, then those actions will be seen as government actions. And they will be seen as a form of censorship that is prohibited by the First Amendment unless there’s some legal basis for censorship…
You can look at this example from Franklin’s life and see some of both sides of what the justices will be deciding in this case. The platform should be neutral. In general, they should aspire to further public debate and that, even when they think something they allow to remain posted to the platform is wrong, they should have some faith in rational discourse. But there is a line, and the platforms or the printers can draw that line where they choose...
The government has a responsibility to inform the public and to engage with digital platforms. They may even criticize digital platforms if they feel that their moderation decisions are being driven by private profits at the expense of the public interest…The government can make rational arguments. What it cannot do is to invoke its power — even implicitly — in a way that makes platforms feel they have no good option but to do what the government says… The government has an important role and responsibility here to be engaging with private platforms, and not just on public health, but on issues of terrorism, and extremism and violence, on issues of taking down illegal content like child sexual abuse material. When there are foreign, state sponsored disinformation campaigns, the government is uniquely positioned to let the platforms know about them. So, they need to be involved with Facebook, X, Google, YouTube, all the big social media companies…
…there’s a difference between X and Facebook and the New York Times. Platforms make content moderation decisions. The New York Times makes editorial decisions. Both are protected by the First Amendment, but they are different decisions, and different considerations apply when deciding when government pressure crosses the line. And this gets back to our discussion of Benjamin Franklin. In the social media space, content moderation may deprive a speaker of the practical ability to have access to digital public square...
submitted by Ur_Anemone to afterAWDTSG [link] [comments]


2024.05.14 17:52 ZealousidealNobody22 Home Assistant ChatGPT-4o integration

I was on ChatGPT and asked it to create an integration for ChatGPT into Home Assistant, and this is what it came up with.

Step 1: Home Assistant Custom Component

First, create a custom component in Home Assistant to interface with the ChatGPT API.
Directory structure:
```
/config/custom_components/chatgpt_integration/
├── __init__.py
├── manifest.json
└── services.yaml
```
__init__.py:
```python
import openai
import homeassistant.helpers.config_validation as cv
import voluptuous as vol
from homeassistant.core import HomeAssistant
from homeassistant.helpers.typing import ConfigType, ServiceCall

DOMAIN = 'chatgpt_integration'
CONF_API_KEY = 'api_key'

CONFIG_SCHEMA = vol.Schema({
    DOMAIN: vol.Schema({
        vol.Required(CONF_API_KEY): cv.string,
    })
}, extra=vol.ALLOW_EXTRA)

def setup(hass: HomeAssistant, config: ConfigType):
    api_key = config[DOMAIN][CONF_API_KEY]
    openai.api_key = api_key

    def handle_chatgpt_request(call: ServiceCall):
        prompt = call.data.get('prompt')
        response = openai.Completion.create(engine="text-davinci-004",
                                            prompt=prompt, max_tokens=150)
        hass.states.set('chatgpt_integration.response',
                        response.choices[0].text.strip())

    hass.services.register(DOMAIN, 'ask', handle_chatgpt_request)
    return True
```
manifest.json:
```json
{
  "domain": "chatgpt_integration",
  "name": "ChatGPT Integration",
  "version": "1.0",
  "requirements": ["openai"],
  "dependencies": [],
  "codeowners": []
}
```
services.yaml:
```yaml
ask:
  description: "Ask ChatGPT a question"
  fields:
    prompt:
      description: "The question to ask ChatGPT"
      example: "What's the weather like today?"
```

Step 2: Web Interface

Create a simple HTML file to serve as the front end for interacting with the ChatGPT service in Home Assistant.
index.html:
(The HTML markup was stripped in this paste; only the page title "Home Assistant ChatGPT Interface" and the heading below survive.)

Home Automation with ChatGPT


Step 3: Deploy the Web Interface

To serve the index.html file, you can use Home Assistant’s built-in web server or another web server like Nginx or Apache.
Option 1: Serving with Home Assistant (simpler) Place index.html in the /config/www directory and access it via http://your-home-assistant-url/local/index.html.
Option 2: Serving with Nginx (advanced) Set up an Nginx server to serve the index.html file and proxy requests to Home Assistant.
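A minimal sketch of such an Nginx server block, assuming Home Assistant listens on 127.0.0.1:8123 and the static files live under /var/www/chatgpt-ui (both placeholders, not from the generated output):

```nginx
server {
    listen 80;
    server_name ha.example.com;     # placeholder domain

    # Serve the static front end
    root /var/www/chatgpt-ui;       # directory containing index.html
    index index.html;

    # Forward REST API calls to Home Assistant (default port 8123)
    location /api/ {
        proxy_pass http://127.0.0.1:8123;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

The `Upgrade`/`Connection` headers keep Home Assistant's websocket connections working through the proxy.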

Step 4: Testing

  1. Ensure your Home Assistant is running and the custom component is properly loaded.
  2. Access the web interface via http://your-home-assistant-url/local/index.html.
  3. Enter a prompt and see the response from ChatGPT displayed.
This example demonstrates a basic integration. For a production-ready solution, consider adding error handling, security measures (like API key protection), and a more sophisticated frontend framework.
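The web page ultimately just POSTs to the custom service, so you can also exercise it without any front end via Home Assistant's REST API (`/api/services/<domain>/<service>`). A standard-library sketch; the base URL and long-lived access token are placeholders:

```python
import json
import urllib.request

def build_ask_request(prompt: str,
                      base_url: str = "http://localhost:8123",
                      token: str = "YOUR_LONG_LIVED_TOKEN") -> urllib.request.Request:
    """Build the POST that invokes the custom chatgpt_integration.ask service."""
    return urllib.request.Request(
        f"{base_url}/api/services/chatgpt_integration/ask",
        data=json.dumps({"prompt": prompt}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To actually send it:
#   with urllib.request.urlopen(build_ask_request("Hello?")) as resp:
#       print(resp.status)
```

After the call, the answer should appear in the `chatgpt_integration.response` state set by the component.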
submitted by ZealousidealNobody22 to homeassistant [link] [comments]


2024.05.14 15:18 Inevitable_Noise_704 Devices showing in HA, but not on z2m webinterface

Hey everyone
I just got back into my home automation project after a looong break. About a year ago I set up HA, mosquitto and z2m as Docker containers, and I got everything working. The z2m webinterface was showing devices, and pairing stuff was a breeze. I'm using the SkyConnect dongle (the blue one).
Now, when getting back into it, I notice that all devices show up on HA, and I can control them just fine. The temperature sensors are showing updated data, and the lights are controllable. But I can't see any data on the z2m webinterface. There's just the menu points, and everything else is blank. No logs, nothing at all.
I restarted the ha-mosquitto-z2m container stack, and even tried removing the containers, repulling the images from latest versions, but no change. I have no clue what's going on, so I hope someone feels like helping.
Thanks, and have a great day!
EDIT: I'm managing the containers with Portainer, and I'm using Nginx Reverse Proxy to access the services from the browser. I doubt that should matter at all, but there it is for ya.
EDIT 2: I'm running the entire show on Ubuntu 23.10 on a Raspberry Pi 400.

My docker-compose.yml configuration:
```yaml
services:
  homeassistant:
    container_name: homeassistant
    image: "ghcr.io/home-assistant/home-assistant:stable"
    volumes:
      - /opt/homeassistant/config:/config
      - /etc/localtime:/etc/localtime:ro
    restart: unless-stopped
    privileged: true
    ports:
      - 8123:8123
  mosquitto:
    image: eclipse-mosquitto
    container_name: mosquitto
    volumes:
      - /opt/mosquitto:/mosquitto
      - /opt/mosquitto/data:/mosquitto/data
      - /opt/mosquitto/log:/mosquitto/log
    restart: unless-stopped
    ports:
      - 1883:1883
      - 9001:9001
  zigbee2mqtt:
    container_name: zigbee2mqtt
    image: koenkk/zigbee2mqtt
    restart: unless-stopped
    volumes:
      - /opt/zigbee2mqtt/data:/app/data
      - /run/udev:/run/udev:ro
    ports:
      # Frontend port
      - 8080:8080
    environment:
      - TZ=Europe/[hidden]
    devices:
      # Make sure this matches your adapter location
      - /dev/ttyUSB0:/dev/ttyACM0

networks:
  default:
    external: true
    name: homeass_network
```
From z2m webinterface:
https://preview.redd.it/glsdtym57e0d1.png?width=936&format=png&auto=webp&s=5842933f39816ff1e7c1059b091b134ea46b87c8
https://preview.redd.it/ehjxogi67e0d1.png?width=880&format=png&auto=webp&s=0be78b01a42fc9af907c84513f5f8156c5737712
submitted by Inevitable_Noise_704 to Zigbee2MQTT [link] [comments]


2024.05.14 15:02 marijuanahh Error: Local Access

'Your browser is misconfigured. Do not use the proxy to access the router console, localhost, or local LAN destinations.' I haven't used i2p for about a year; now when I run the app I can't access any sites, and this is the error I get. Can anyone please help?
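As the error itself says, the browser is sending local destinations (the router console, localhost) through the i2p HTTP proxy. One way to proxy only .i2p hosts is a proxy auto-config (PAC) rule; this is a sketch assuming i2p's default HTTP proxy port 4444, so adjust if your setup differs:

```javascript
// PAC sketch: only .i2p destinations go through the i2p HTTP proxy;
// localhost and regular web hosts connect directly.
function FindProxyForURL(url, host) {
  if (isPlainHostName(host) || host === "localhost" || host === "127.0.0.1")
    return "DIRECT";
  if (dnsDomainIs(host, ".i2p"))
    return "PROXY 127.0.0.1:4444";
  return "DIRECT";
}
```

The same effect is achievable in most browsers or proxy extensions by adding localhost/127.0.0.1 to the proxy exception list.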
submitted by marijuanahh to i2p [link] [comments]


2024.05.14 13:18 s_deely Wireguard help

Currently I am trying to set up wg on my remote Linux server so I can tunnel into my home network and access my services which I have deployed on docker. A couple of these docker services I share amongst friends and family using nginx reverse proxy.
The problem is that when I enable the wg interface, my SSH session (I currently use ssh to configure wg) gets disconnected and also the services which I expose over nginx become unreachable. To resolve this, I connect to another PC on my home network and SSH into the server and disable the interface.
Is this the expected behavior? I had hoped that wg would simply allow me to tunnel into my home network while leaving everything else as is.
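This symptom usually means the peer's `AllowedIPs` is set to `0.0.0.0/0`: bringing the interface up then replaces the server's default route, which would explain both the dropped SSH session and the unreachable nginx services. A split-tunnel peer entry might look like this sketch (all addresses and keys are placeholders, not taken from the post):

```ini
# /etc/wireguard/wg0.conf on the remote server (example addresses)
[Interface]
Address = 10.8.0.2/24
PrivateKey = <server-private-key>

[Peer]
PublicKey = <home-endpoint-public-key>
Endpoint = home.example.com:51820
# Route only the tunnel subnet and the home LAN into the tunnel.
# 0.0.0.0/0 here would hijack the default route and cut off SSH and nginx.
AllowedIPs = 10.8.0.0/24, 192.168.1.0/24
PersistentKeepalive = 25
```

With `AllowedIPs` restricted like this, wg only claims routes for the home network and leaves everything else, including inbound SSH and proxied traffic, untouched.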
submitted by s_deely to WireGuard [link] [comments]


2024.05.14 12:21 rweninger Nextcloud Upgrade from chart version 1.6.61 to 2.0.5 failed

I am not sure I actually want to solve this issue; I just want to vent.
iX, what were you thinking when you printed out this error message to a "customer"?
I mean, your installation of Kubernetes on a single host is crap, and Helm charts that utterly break in a chain reaction like this don't make it trustworthy. I am in the process of migrating Nextcloud away from TrueNAS to a Docker host, and will just use TrueNAS as storage.
I don't care about sensitive data down there; at the time of posting, this system isn't running anymore. Sorry if I annoy somebody.
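Incidentally, the `invalid leading UTF-8 octet` in the dump below usually means one of the stored chart values contains bytes that are not valid UTF-8 (note the garbled `ncDbUser` field further down). If you can export the values to a file, a quick standard-library sketch to locate every offending byte run (the file name in the usage comment is a placeholder):

```python
def find_invalid_utf8(data: bytes):
    """Return (offset, bad_bytes) pairs for every run that fails UTF-8 decoding."""
    problems = []
    pos = 0
    while pos < len(data):
        try:
            data[pos:].decode("utf-8")
            break  # the remainder decodes cleanly
        except UnicodeDecodeError as e:
            start = pos + e.start
            bad = bytes(data[start:start + (e.end - e.start)])
            problems.append((start, bad))
            pos = start + (e.end - e.start)  # skip past the bad run
    return problems

# Usage: problems = find_invalid_utf8(open("values.yaml", "rb").read())
```

Each reported offset points at bytes you can then inspect (and fix, e.g. by re-entering the database username) before retrying the upgrade.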
[EFAULT] Failed to upgrade App: WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /etc/ranchek3s/k3s.yaml Error: UPGRADE FAILED: execution error at (nextcloud/templates/common.yaml:38:4): Chart - Values contain an error that may be a result of merging. Values containing the error: Error: 'error converting YAML to JSON: yaml: invalid leading UTF-8 octet' TZ: UTC bashImage: pullPolicy: IfNotPresent repository: bash tag: 4.4.23 configmap: nextcloud-config: data: limitrequestbody.conf: LimitRequestBody 3221225472 occ: - #!/bin/bash uid="$(id -u)" gid="$(id -g)" if [ "$uid" = '0' ]; then user='www-data' group='www-data' else user="$uid" group="$gid" fi run_as() { if [ "$(id -u)" = 0 ]; then su -p "$user" -s /bin/bash -c 'php /vawww/html/occ "$@"' - "$@" else /bin/bash -c 'php /vawww/html/occ "$@"' - "$@" fi } run_as "$@" opcache.ini: opcache.memory_consumption=128 php.ini: max_execution_time=30 enabled: true nginx: data: nginx.conf: - events {} http { server { listen 9002 ssl http2; listen [::]:9002 ssl http2; # Redirect HTTP to HTTPS error_page 497 301 =307 https://$host$request_uri; ssl_certificate '/etc/nginx-certs/public.crt'; ssl_certificate_key '/etc/nginx-certs/private.key'; client_max_body_size 3G; add_header Strict-Transport-Security "max-age=15552000; includeSubDomains; preload" always; location = /robots.txt { allow all; log_not_found off; access_log off; } location = /.well-known/carddav { return 301 $scheme://$host/remote.php/dav; } location = /.well-known/caldav { return 301 $scheme://$host/remote.php/dav; } location / { proxy_pass http://nextcloud:80; proxy_http_version 1.1; proxy_cache_bypass $http_upgrade; proxy_request_buffering off; # Proxy headers proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto https; 
proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Port 443; # Proxy timeouts proxy_connect_timeout 60s; proxy_send_timeout 60s; proxy_read_timeout 60s; } } } enabled: true fallbackDefaults: accessModes: - ReadWriteOnce persistenceType: emptyDir probeTimeouts: liveness: failureThreshold: 5 initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 5 readiness: failureThreshold: 5 initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 2 timeoutSeconds: 5 startup: failureThreshold: 60 initialDelaySeconds: 10 periodSeconds: 5 successThreshold: 1 timeoutSeconds: 2 probeType: http pvcRetain: false pvcSize: 1Gi serviceProtocol: tcp serviceType: ClusterIP storageClass: "" global: annotations: {} ixChartContext: addNvidiaRuntimeClass: false hasNFSCSI: true hasSMBCSI: true isInstall: false isStopped: false isUpdate: false isUpgrade: true kubernetes_config: cluster_cidr: 172.16.0.0/16 cluster_dns_ip: 172.17.0.10 service_cidr: 172.17.0.0/16 nfsProvisioner: nfs.csi.k8s.io nvidiaRuntimeClassName: nvidia operation: UPGRADE smbProvisioner: smb.csi.k8s.io storageClassName: ix-storage-class-nextcloud upgradeMetadata: newChartVersion: 2.0.5 oldChartVersion: 1.6.61 preUpgradeRevision: 89 labels: {} minNodePort: 9000 image: pullPolicy: IfNotPresent repository: nextcloud tag: 29.0.0 imagePullSecret: [] ixCertificateAuthorities: {} ixCertificates: "1": CA_type_existing: false CA_type_intermediate: false CA_type_internal: false CSR: null DN: /C=US/O=iXsystems/CN=localhost/emailAddress=info@ixsystems.com/ST=Tennessee/L=Maryville/subjectAltName=DNS:localhost can_be_revoked: false cert_type: CERTIFICATE cert_type_CSR: false cert_type_existing: true cert_type_internal: false certificate: -----BEGIN CERTIFICATE----- MIIDrTCCApWgAwIBAgIEHHHd+zANBgkqhkiG9w0BAQsFADCBgDELMAkGA1UEBhMC VVMxEjAQBgNVBAoMCWlYc3lzdGVtczESMBAGA1UEAwwJbG9jYWxob3N0MSEwHwYJ KoZIhvcNAQkBFhJpbmZvQGl4c3lzdGVtcy5jb20xEjAQBgNVBAgMCVRlbm5lc3Nl 
ZTESMBAGA1UEBwwJTWFyeXZpbGxlMB4XDTIzMTIxNjA3MDUwOVoXDTI1MDExNjA3 MDUwOVowgYAxCzAJBgNVBAYTAlVTMRIwEAYDVQQKDAlpWHN5c3RlbXMxEjAQBgNV BAMMCWxvY2FsaG9zdDEhMB8GCSqGSIb3DQEJARYSaW5mb0BpeHN5c3RlbXMuY29t MRIwEAYDVQQIDAlUZW5uZXNzZWUxEjAQBgNVBAcMCU1hcnl2aWxsZTCCASIwDQYJ KoZIhvcNAQEBBQADggEPADCCAQoCggEBAKPRN3n5ngKFrHQ12gKCmLEN85If6B3E KEo4nvTkTIWLzXZcTGxlJ9kGr9bt0V8cvEInZnOCnyY74lzKlMhZv1R58nfBmz5a gpV6scHXZVghGhGsjtP7/H4PRMUbzM9MawET8+Au8grjAodUkz6Jskcwhgg9EVS5 UQPTDkxXJYFRUN1XhJOR4tqsrHFrI25oUF6Gms9Wp1aq0mJXh+FIGAyELqpdk/Q8 N1Rjn3t4m2Ub+OPmBLwHOncIqz2PHVgL574bT/q+Lc3Mi/gQsfNi6VN7UkNTQ5Q2 uOhrcw4gtjn41v0j7k9CsUvPK8zfCizQHgBx6Ih33Z850pHUQyNuwjECAwEAAaMt MCswFAYDVR0RBA0wC4IJbG9jYWxob3N0MBMGA1UdJQQMMAoGCCsGAQUFBwMBMA0G CSqGSIb3DQEBCwUAA4IBAQAQG2KsF6ki8dooaaM+32APHJp38LEmLNIMdnIlCHPw RnQ+4I8ssEPKk3czIzOlOe6R3V71GWg1JlGEuUD6M3rPbzSfWzv0kdji/qgzUId1 oh9vEao+ndPijYpDi6CUcBADuzilcygSBl05j6RlS2Uv8+tNIjxTKrDegyaEtC3W RoVqON0vhDSKJ3OsOKR2g5uFfs/uHxBvskkChdGn/1aRz+DdHCYVOEavnQylXPBk xzWQDVt6+6mAhejGGkkGsIG1QY7pFpQPA9UWeY/C/3/QdSl01GgfpyWNsfE+Wu1b IS3wxfWfuiMiDbUElqjDqiy623peeVFXrWlTV4G4yBG/ -----END CERTIFICATE----- certificate_path: /etc/certificates/truenas_default.crt chain: false chain_list: - -----BEGIN CERTIFICATE----- MIIDrTCCApWgAwIBAgIEHHHd+zANBgkqhkiG9w0BAQsFADCBgDELMAkGA1UEBhMC VVMxEjAQBgNVBAoMCWlYc3lzdGVtczESMBAGA1UEAwwJbG9jYWxob3N0MSEwHwYJ KoZIhvcNAQkBFhJpbmZvQGl4c3lzdGVtcy5jb20xEjAQBgNVBAgMCVRlbm5lc3Nl ZTESMBAGA1UEBwwJTWFyeXZpbGxlMB4XDTIzMTIxNjA3MDUwOVoXDTI1MDExNjA3 MDUwOVowgYAxCzAJBgNVBAYTAlVTMRIwEAYDVQQKDAlpWHN5c3RlbXMxEjAQBgNV BAMMCWxvY2FsaG9zdDEhMB8GCSqGSIb3DQEJARYSaW5mb0BpeHN5c3RlbXMuY29t MRIwEAYDVQQIDAlUZW5uZXNzZWUxEjAQBgNVBAcMCU1hcnl2aWxsZTCCASIwDQYJ KoZIhvcNAQEBBQADggEPADCCAQoCggEBAKPRN3n5ngKFrHQ12gKCmLEN85If6B3E KEo4nvTkTIWLzXZcTGxlJ9kGr9bt0V8cvEInZnOCnyY74lzKlMhZv1R58nfBmz5a gpV6scHXZVghGhGsjtP7/H4PRMUbzM9MawET8+Au8grjAodUkz6Jskcwhgg9EVS5 UQPTDkxXJYFRUN1XhJOR4tqsrHFrI25oUF6Gms9Wp1aq0mJXh+FIGAyELqpdk/Q8 
N1Rjn3t4m2Ub+OPmBLwHOncIqz2PHVgL574bT/q+Lc3Mi/gQsfNi6VN7UkNTQ5Q2 uOhrcw4gtjn41v0j7k9CsUvPK8zfCizQHgBx6Ih33Z850pHUQyNuwjECAwEAAaMt MCswFAYDVR0RBA0wC4IJbG9jYWxob3N0MBMGA1UdJQQMMAoGCCsGAQUFBwMBMA0G CSqGSIb3DQEBCwUAA4IBAQAQG2KsF6ki8dooaaM+32APHJp38LEmLNIMdnIlCHPw RnQ+4I8ssEPKk3czIzOlOe6R3V71GWg1JlGEuUD6M3rPbzSfWzv0kdji/qgzUId1 oh9vEao+ndPijYpDi6CUcBADuzilcygSBl05j6RlS2Uv8+tNIjxTKrDegyaEtC3W RoVqON0vhDSKJ3OsOKR2g5uFfs/uHxBvskkChdGn/1aRz+DdHCYVOEavnQylXPBk xzWQDVt6+6mAhejGGkkGsIG1QY7pFpQPA9UWeY/C/3/QdSl01GgfpyWNsfE+Wu1b IS3wxfWfuiMiDbUElqjDqiy623peeVFXrWlTV4G4yBG/ -----END CERTIFICATE----- city: Maryville common: localhost country: US csr_path: /etc/certificates/truenas_default.csr digest_algorithm: SHA256 email: info@ixsystems.com expired: false extensions: ExtendedKeyUsage: TLS Web Server Authentication SubjectAltName: DNS:localhost fingerprint: 8E:68:9D:0A:7D:A6:41:11:59:B0:0C:01:8C:AC:C4:F4:DB:F9:6B:2C from: Sat Dec 16 08:05:09 2023 id: 1 internal: "NO" issuer: external key_length: 2048 key_type: RSA lifetime: 397 name: truenas_default organization: iXsystems organizational_unit: null parsed: true privatekey: -----BEGIN PRIVATE KEY----- MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCj0Td5+Z4Chax0 NdoCgpixDfOSH+gdxChKOJ705EyFi812XExsZSfZBq/W7dFfHLxCJ2Zzgp8mO+Jc ypTIWb9UefJ3wZs+WoKVerHB12VYIRoRrI7T+/x+D0TFG8zPTGsBE/PgLvIK4wKH VJM+ibJHMIYIPRFUuVED0w5MVyWBUVDdV4STkeLarKxxayNuaFBehprPVqdWqtJi V4fhSBgMhC6qXZP0PDdUY597eJtlG/jj5gS8Bzp3CKs9jx1YC+e+G0/6vi3NzIv4 ELHzYulTe1JDU0OUNrjoa3MOILY5+Nb9I+5PQrFLzyvM3wos0B4AceiId92fOdKR 1EMjbsIxAgMBAAECggEAS/Su51RxCjRWwM9TVUSebcHNRNyccGjKUZetRFkyjd1D l/S1zrCcaElscJh2MsaNF5NTMo3HIyAzFdksYTUTvKSKYzKWu7OVxp9MGle3+sPm ZXmABBRbf0uvFEGOljOVjbtloXXC7n9RZdQ2LZIE4nNCQkGmboU6Zi6O+6CQmEOQ 9iyYJ8NyXtjDT2sVOpysAj3ga6tdtSosG7SQuo41t20mw6hbl08LhQP9LfZJyKCR 0x1cYny+XHifB6JQAt8crzHYpKaJc2tZd4dXJ1xDnm2Aa/Au5uEA01P/L3hf41sI cUmBhVf1z5m9yBsyaZnW6LzaR5tQwpnPWPEcNfuwLQKBgQDM1o8vwKCo435shpGE zCdqbvK4+J0XYmbgEwHId8xr9rzZ852lAhs6VO2WQQVMGUoWRaH44B3z1Jv9N5Qa 
4RUwnTb1MERfzEjRwUuIWjtz34yAXko0iU3M0FYpIxDuKVJNOEO1Doey0lTUcIYQ sfRUVxxJZ3hpDo7RhPSZpwyBtwKBgQDMu8PFVQ5XRb90qaGqg+ACaXMfHXfuWzuJ UqgyNrvF6wqd9Z0Nn299m7EonE6qJftUqlqHC62OCBfqRBNkwOw40s7ORZvqUCkP 7WsWuJu4HqhS2we8yKRuqj520VP537ZeqnK64mDxDKBvL9ttCujbxy01WFWcdwkO sSAViAK7VwKBgQCAeNG1kYsyYfyY9I2wTJssFgoGGWftkroTL9iecwSzcj1gNXta Usfg/gNFieJYqEPfVC0Sev5OP7rWRlWNxj4UD4a4oV1A+E9zv1gwXOeM9ViZ6omA Cd3R55kik+u6dBA6fl9433Qco+6wjyKGthYYD8qd/1d2DLtmjY0cEbm2YQKBgH4/ Zuifm5lLhFVPaUa5zYAPQJM2W8da8OqsUtWsFLxmRQTE+ZT19Q1S3br6MDQR+drq tapDFEHaUcz/L6pYoRIlRKvEFvI1fiy5Lekz66ptFUUKlcnfPC6VwrEIQi16u33C w77ka/0Y2THXJAsoyBEG0KTtlNVIPgiWRv+gAHc/AoGATOlO6ZVhf0vWPIKBhajM ijWTNIX/iCNOheJEjLEPksG4LVpU16OphZL2m0nIyOryQ0Fmt7GHUfl3CXFhTH/P G47PzH+mLCQLp5TUIeNRQWScWNGGsf9J+MtwpxHMzUymDJySR4aot0bH3fge0MO1 QccFxNbLODRmJuYbSQB1HZQ= -----END PRIVATE KEY----- privatekey_path: /etc/certificates/truenas_default.key revoked: false revoked_date: null root_path: /etc/certificates san: - DNS:localhost serial: 477224443 signedby: null state: Tennessee subject_name_hash: 3193428416 type: 8 until: Thu Jan 16 08:05:09 2025 ixChartContext: addNvidiaRuntimeClass: false hasNFSCSI: true hasSMBCSI: true isInstall: false isStopped: false isUpdate: false isUpgrade: true kubernetes_config: cluster_cidr: 172.16.0.0/16 cluster_dns_ip: 172.17.0.10 service_cidr: 172.17.0.0/16 nfsProvisioner: nfs.csi.k8s.io nvidiaRuntimeClassName: nvidia operation: UPGRADE smbProvisioner: smb.csi.k8s.io storageClassName: ix-storage-class-nextcloud upgradeMetadata: newChartVersion: 2.0.5 oldChartVersion: 1.6.61 preUpgradeRevision: 89 ixExternalInterfacesConfiguration: [] ixExternalInterfacesConfigurationNames: [] ixVolumes: - hostPath: /mnt/Camelot/ix-applications/releases/nextcloud/volumes/ix_volumes/ix-postgres_backups mariadbImage: pullPolicy: IfNotPresent repository: mariadb tag: 10.6.14 ncConfig: additionalEnvs: [] adminPassword: d3k@M%YRBRcj adminUser: admin commands: [] cron: enabled: false schedule: '*/15 * * * *' dataDir: 
/vawww/html/data host: charon.weninger.local maxExecutionTime: 30 maxUploadLimit: 3 opCacheMemoryConsumption: 128 phpMemoryLimit: 512 ncDbHost: nextcloud-postgres ncDbName: nextcloud ncDbPass: XvgIoT84hMmNDlH ncDbUser: ��-��� ncNetwork: certificateID: 1 nginx: externalAccessPort: 443 proxyTimeouts: 60 useDifferentAccessPort: false webPort: 9002 ncPostgresImage: pullPolicy: IfNotPresent repository: postgres tag: "13.1" ncStorage: additionalStorages: [] data: hostPathConfig: aclEnable: false hostPath: /mnt/Camelot/Applications/Nextcloud/ncdata ixVolumeConfig: datasetName: data type: hostPath html: hostPathConfig: aclEnable: false hostPath: /mnt/Camelot/Applications/Nextcloud/ncdata ixVolumeConfig: datasetName: html type: hostPath isDataInTheSameVolume: true migrationFixed: true pgBackup: ixVolumeConfig: aclEnable: false datasetName: ix-postgres_backups type: ixVolume pgData: hostPathConfig: aclEnable: false hostPath: /mnt/Camelot/Applications/Nextcloud/pgdata ixVolumeConfig: datasetName: pgData type: hostPath nginxImage: pullPolicy: IfNotPresent repository: nginx tag: 1.25.4 notes: custom: ## Database You can connect to the database using the pgAdmin App from the catalog
Database Details
- Database: \{{ .Values.ncDbName }}` - Username: `{{ .Values.ncDbUser }}` - Password: `{{ .Values.ncDbPass }}` - Host: `{{ .Values.ncDbHost }}.{{ .Release.Namespace }}.svc.cluster.local` - Port: `5432``
{{- $_ := unset .Values "ncDbUser" }} {{- $_ := unset .Values "ncDbName" }} {{- $_ := unset .Values "ncDbPass" }} {{- $_ := unset .Values "ncDbHost" }} Note: Nextcloud will create an additional new user and password for the admin user on first startup. You can find those credentials in the \/vawww/html/config/config.php` file inside the container. footer: # Documentation Documentation for this app can be found at https://www.truenas.com/docs. # Bug reports If you find a bug in this app, please file an issue at https://ixsystems.atlassian.net header: # Welcome to TrueNAS SCALE Thank you for installing {{ .Chart.Annotations.title }} App. persistence: config: datasetName: null domain: null enabled: true hostPath: /mnt/Camelot/Applications/Nextcloud/ncdata medium: null password: null readOnly: false server: null share: null size: null targetSelector: nextcloud: nextcloud: mountPath: /vawww/html/config subPath: config nextcloud-cron: nextcloud-cron: mountPath: /vawww/html/config subPath: config type: hostPath username: null customapps: datasetName: null domain: null enabled: true hostPath: /mnt/Camelot/Applications/Nextcloud/ncdata medium: null password: null readOnly: false server: null share: null size: null targetSelector: nextcloud: nextcloud: mountPath: /vawww/html/customapps subPath: custom_apps nextcloud-cron: nextcloud-cron: mountPath: /vawww/html/custom_apps subPath: custom_apps type: hostPath username: null data: datasetName: null domain: null enabled: true hostPath: /mnt/Camelot/Applications/Nextcloud/ncdata medium: null password: null readOnly: false server: null share: null size: null targetSelector: nextcloud: nextcloud: mountPath: /vawww/html/data subPath: data nextcloud-cron: nextcloud-cron: mountPath: /vawww/html/data subPath: data type: hostPath username: null html: datasetName: null domain: null enabled: true hostPath: /mnt/Camelot/Applications/Nextcloud/ncdata medium: null password: null readOnly: false server: null share: null size: null 
targetSelector: nextcloud: nextcloud: mountPath: /vawww/html subPath: html nextcloud-cron: nextcloud-cron: mountPath: /vawww/html subPath: html postgresbackup: postgresbackup: mountPath: /nc-config type: hostPath username: null nc-config-limreqbody: defaultMode: "0755" enabled: true objectName: nextcloud-config targetSelector: nextcloud: nextcloud: mountPath: /etc/apache2/conf-enabled/limitrequestbody.conf subPath: limitrequestbody.conf type: configmap nc-config-opcache: defaultMode: "0755" enabled: true objectName: nextcloud-config targetSelector: nextcloud: nextcloud: mountPath: /uslocal/etc/php/conf.d/opcache-z-99.ini subPath: opcache.ini type: configmap nc-config-php: defaultMode: "0755" enabled: true objectName: nextcloud-config targetSelector: nextcloud: nextcloud: mountPath: /uslocal/etc/php/conf.d/nextcloud-z-99.ini subPath: php.ini type: configmap nc-occ: defaultMode: "0755" enabled: true objectName: nextcloud-config targetSelector: nextcloud: nextcloud: mountPath: /usbin/occ subPath: occ type: configmap nginx-cert: defaultMode: "0600" enabled: true items: - key: tls.key path: private.key - key: tls.crt path: public.crt objectName: nextcloud-cert targetSelector: nginx: nginx: mountPath: /etc/nginx-certs readOnly: true type: secret nginx-conf: defaultMode: "0600" enabled: true items: - key: nginx.conf path: nginx.conf objectName: nginx targetSelector: nginx: nginx: mountPath: /etc/nginx readOnly: true type: configmap postgresbackup: datasetName: ix-postgres_backups domain: null enabled: true hostPath: null medium: null password: null readOnly: false server: null share: null size: null targetSelector: postgresbackup: permissions: mountPath: /mnt/directories/postgres_backup postgresbackup: mountPath: /postgres_backup type: ixVolume username: null postgresdata: datasetName: null domain: null enabled: true hostPath: /mnt/Camelot/Applications/Nextcloud/pgdata medium: null password: null readOnly: false server: null share: null size: null targetSelector: 
postgres: permissions: mountPath: /mnt/directories/postgres_data postgres: mountPath: /valib/postgresql/data type: hostPath username: null themes: datasetName: null domain: null enabled: true hostPath: /mnt/Camelot/Applications/Nextcloud/ncdata medium: null password: null readOnly: false server: null share: null size: null targetSelector: nextcloud: nextcloud: mountPath: /vawww/html/themes subPath: themes nextcloud-cron: nextcloud-cron: mountPath: /vawww/html/themes subPath: themes type: hostPath username: null tmp: enabled: true targetSelector: nextcloud: nextcloud: mountPath: /tmp type: emptyDir podOptions: automountServiceAccountToken: false dnsConfig: options: [] dnsPolicy: ClusterFirst enableServiceLinks: false hostAliases: [] hostNetwork: false restartPolicy: Always runtimeClassName: "" terminationGracePeriodSeconds: 30 tolerations: [] portal: {} postgresImage: pullPolicy: IfNotPresent repository: postgres tag: "15.2" rbac: {} redisImage: pullPolicy: IfNotPresent repository: bitnami/redis tag: 7.0.11 release_name: nextcloud resources: NVIDIA_CAPS: - all limits: cpu: 4000m memory: 8Gi requests: cpu: 10m memory: 50Mi scaleCertificate: nextcloud-cert: enabled: true id: 1 scaleExternalInterface: [] scaleGPU: [] secret: {} securityContext: container: PUID: 568 UMASK: "002" allowPrivilegeEscalation: false capabilities: add: [] drop: - ALL privileged: false readOnlyRootFilesystem: true runAsGroup: 568 runAsNonRoot: true runAsUser: 568 seccompProfile: type: RuntimeDefault pod: fsGroup: 568 fsGroupChangePolicy: OnRootMismatch supplementalGroups: [] sysctls: [] service: nextcloud: enabled: true ports: webui: enabled: true port: 80 primary: true targetPort: 80 targetSelector: nextcloud primary: true targetSelector: nextcloud type: ClusterIP nextcloud-nginx: enabled: true ports: webui-tls: enabled: true nodePort: 9002 port: 9002 targetPort: 9002 targetSelector: nginx targetSelector: nginx type: NodePort postgres: enabled: true ports: postgres: enabled: true port: 5432 
primary: true targetPort: 5432 targetSelector: postgres targetSelector: postgres type: ClusterIP redis: enabled: true ports: redis: enabled: true port: 6379 primary: true targetPort: 6379 targetSelector: redis targetSelector: redis type: ClusterIP serviceAccount: {} workload: nextcloud: enabled: true podSpec: containers: nextcloud: enabled: true envFrom: - secretRef: name: nextcloud-creds imageSelector: image lifecycle: postStart: command: - /bin/sh - -c - echo "Installing ..." apt update && apt install -y --no-install-recommends \ echo "Failed to install binary/binaries..." echo "Finished." type: exec primary: true probes: liveness: enabled: true httpHeaders: Host: localhost path: /status.php port: 80 type: http readiness: enabled: true httpHeaders: Host: localhost path: /status.php port: 80 type: http startup: enabled: true httpHeaders: Host: localhost path: /status.php port: 80 type: http securityContext: capabilities: add: - CHOWN - DAC_OVERRIDE - FOWNER - NET_BIND_SERVICE - NET_RAW - SETGID - SETUID readOnlyRootFilesystem: false runAsGroup: 0 runAsNonRoot: false runAsUser: 0 hostNetwork: false initContainers: postgres-wait: args: - -c - echo "Waiting for postgres to be ready" until pg_isready -h ${POSTGRES_HOST} -U ${POSTGRES_USER} -d ${POSTGRES_DB}; do sleep 2 done command: bash enabled: true envFrom: - secretRef: name: postgres-creds imageSelector: postgresImage resources: limits: cpu: 500m memory: 256Mi type: init redis-wait: args: - -c - - echo "Waiting for redis to be ready" until redis-cli -h "$REDIS_HOST" -a "$REDIS_PASSWORD" -p ${REDIS_PORT_NUMBER:-6379} ping grep -q PONG; do echo "Waiting for redis to be ready. Sleeping 2 seconds..." sleep 2 done echo "Redis is ready!" 
command: bash enabled: true envFrom: - secretRef: name: redis-creds imageSelector: redisImage resources: limits: cpu: 500m memory: 256Mi type: init securityContext: fsGroup: 33 primary: true type: Deployment nginx: enabled: true podSpec: containers: nginx: enabled: true imageSelector: nginxImage primary: true probes: liveness: enabled: true httpHeaders: Host: localhost path: /status.php port: 9002 type: https readiness: enabled: true httpHeaders: Host: localhost path: /status.php port: 9002 type: https startup: enabled: true httpHeaders: Host: localhost path: /status.php port: 9002 type: https securityContext: capabilities: add: - CHOWN - DAC_OVERRIDE - FOWNER - NET_BIND_SERVICE - NET_RAW - SETGID - SETUID readOnlyRootFilesystem: false runAsGroup: 0 runAsNonRoot: false runAsUser: 0 hostNetwork: false initContainers: 01-wait-server: args: - -c - - echo "Waiting for [http://nextcloud:80]"; until wget --spider --quiet --timeout=3 --tries=1 http://nextcloud:80/status.php; do echo "Waiting for [http://nextcloud:80]"; sleep 2; done echo "Nextcloud is up: http://nextcloud:80"; command: - bash enabled: true imageSelector: bashImage type: init type: Deployment postgres: enabled: true podSpec: containers: postgres: enabled: true envFrom: - secretRef: name: postgres-creds imageSelector: ncPostgresImage primary: true probes: liveness: command: - sh - -c - until pg_isready -U ${POSTGRES_USER} -h localhost; do sleep 2; done enabled: true type: exec readiness: command: - sh - -c - until pg_isready -U ${POSTGRES_USER} -h localhost; do sleep 2; done enabled: true type: exec startup: command: - sh - -c - until pg_isready -U ${POSTGRES_USER} -h localhost; do sleep 2; done enabled: true type: exec resources: limits: cpu: 4000m memory: 8Gi securityContext: readOnlyRootFilesystem: false runAsGroup: 999 runAsUser: 999 initContainers: permissions: args: - -c - "for dir in /mnt/directories/; do\n if [ ! 
-d \"$dir\" ]; then\n echo \"[$dir] is not a directory, skipping\"\n continue\n fi\n\n echo \"Current Ownership and Permissions on [\"$dir\"]:\"\n echo \"chown: $(stat -c \"%u %g\" \"$dir\")\"\n echo \"chmod: $(stat -c \"%a\" \"$dir\")\" \n fix_owner=\"true\"\n fix_perms=\"true\"\n\n\n if [ \"$fix_owner\" = \"true\" ]; then\n echo \"Changing ownership to 999:999 on: [\"$dir\"]\"\n \ chown -R 999:999 \"$dir\"\n echo \"Finished changing ownership\"\n \ echo \"Ownership after changes:\"\n stat -c \"%u %g\" \"$dir\"\n \ fi\ndone\n" command: bash enabled: true imageSelector: bashImage resources: limits: cpu: 1000m memory: 512Mi securityContext: capabilities: add: - CHOWN readOnlyRootFilesystem: false runAsGroup: 0 runAsNonRoot: false runAsUser: 0 type: install type: Deployment postgresbackup: annotations: helm.sh/hook: pre-upgrade helm.sh/hook-delete-policy: hook-succeeded helm.sh/hook-weight: "1" enabled: true podSpec: containers: postgresbackup: command: - sh - -c - echo 'Fetching password from config.php' # sed removes ' , => spaces and db from the string POSTGRES_USER=$(cat /nc-config/config/config.php grep 'dbuser' sed "s/dbuser ',=>//g") POSTGRES_PASSWORD=$(cat /nc-config/config/config.php grep 'dbpassword' sed "s/dbpassword ',=>//g") POSTGRES_DB=$(cat /nc-config/config/config.php grep 'dbname' sed "s/dbname ',=>//g") [ -n "$POSTGRES_USER" ] && [ -n "$POSTGRES_PASSWORD" ] && [ -n "$POSTGRES_DB" ] && echo 'User, Database and password fetched from config.php' until pg_isready -U ${POSTGRES_USER} -h ${POSTGRES_HOST}; do sleep 2; done echo "Creating backup of ${POSTGRES_DB} database" pg_dump --dbname=${POSTGRES_URL} --file /postgres_backup/${POSTGRES_DB}$(date +%Y-%m-%d_%H-%M-%S).sql echo "Failed to create backup" echo "Backup finished" enabled: true envFrom: - secretRef: name: postgres-backup-creds imageSelector: ncPostgresImage primary: true probes: liveness: enabled: false readiness: enabled: false startup: enabled: false resources: limits: cpu: 2000m memory: 2Gi 
securityContext: readOnlyRootFilesystem: false runAsGroup: 999 runAsUser: 999 initContainers: permissions: args: - -c - "for dir in /mnt/directories/*; do\n if [ ! -d \"$dir\" ]; then\n echo \"[$dir] is not a directory, skipping\"\n continue\n fi\n\n echo \"Current Ownership and Permissions on [\"$dir\"]:\"\n echo \"chown: $(stat -c \"%u %g\" \"$dir\")\"\n echo \"chmod: $(stat -c \"%a\" \"$dir\")\" \n if [ $(stat -c %u \"$dir\") -eq 999 ] && [ $(stat -c %g \"$dir\") -eq 999 ]; then\n echo \"Ownership is correct. Skipping...\"\n fix_owner=\"false\"\n \ else\n echo \"Ownership is incorrect. Fixing...\"\n fix_owner=\"true\"\n \ fi\n\n\n if [ \"$fix_owner\" = \"true\" ]; then\n echo \"Changing ownership to 999:999 on: [\"$dir\"]\"\n chown -R 999:999 \"$dir\"\n \ echo \"Finished changing ownership\"\n echo \"Ownership after changes:\"\n \ stat -c \"%u %g\" \"$dir\"\n fi\ndone" command: bash enabled: true imageSelector: bashImage resources: limits: cpu: 1000m memory: 512Mi securityContext: capabilities: add: - CHOWN readOnlyRootFilesystem: false runAsGroup: 0 runAsNonRoot: false runAsUser: 0 type: init restartPolicy: Never securityContext: fsGroup: "33" type: Job redis: enabled: true podSpec: containers: redis: enabled: true envFrom: - secretRef: name: redis-creds imageSelector: redisImage primary: true probes: liveness: command: - /bin/sh - -c - redis-cli -a "$REDIS_PASSWORD" -p ${REDIS_PORT_NUMBER:-6379} ping grep -q PONG enabled: true type: exec readiness: command: - /bin/sh - -c - redis-cli -a "$REDIS_PASSWORD" -p ${REDIS_PORT_NUMBER:-6379} ping grep -q PONG enabled: true type: exec startup: command: - /bin/sh - -c - redis-cli -a "$REDIS_PASSWORD" -p ${REDIS_PORT_NUMBER:-6379} ping grep -q PONG enabled: true type: exec resources: limits: cpu: 4000m memory: 8Gi securityContext: readOnlyRootFilesystem: false runAsGroup: 0 runAsNonRoot: false runAsUser: 1001 securityContext: fsGroup: 1001 type: Deployment See error above values.`
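The postgres-wait, redis-wait and wait-server init containers in the values above all follow the same poll-until-ready pattern. A generic, self-contained sketch of that pattern (illustrative Python only; the chart itself shells out to pg_isready, redis-cli and wget rather than probing raw TCP):

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 30.0, interval: float = 2.0) -> bool:
    """Poll until a TCP connect to host:port succeeds, mirroring the
    "until <ready-check>; do sleep 2; done" loops in the init containers."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True  # something is accepting connections
        except OSError:
            time.sleep(interval)  # not ready yet, retry after a pause
    return False  # gave up after the deadline

# Likely False on a typical machine: nothing usually listens on TCP port 1.
print(wait_for_port("127.0.0.1", 1, timeout=0.3, interval=0.1))
```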
submitted by rweninger to truenas [link] [comments]


2024.05.14 10:23 zlackool For people who are not able to access the site for Admit Card, use Proxy or VPN

For people who are not able to access the site for Admit Card, use Proxy or VPN submitted by zlackool to CUETards [link] [comments]


2024.05.14 10:01 AutoModerator Weekly Game Questions and Help Thread + Megathread Listing

Weekly Game Questions and Help Thread + Megathread Listing

Weekly Game Questions and Help Thread
Greetings all new, returning, and existing ARKS defenders!
The "Weekly Game Questions and Help Thread" thread is posted every Wednesday on this subreddit for all your PSO2:NGS-related questions, technical support needs and general help requests. This is the place to ask any question, no matter how simple, obscure or repeatedly asked.

New to NGS?

The official website has an overview for new players as well as a game guide. Make sure to use this obscure drop-down menu if you're on mobile to access more pages.
If you like watching a video, SEGA recently released a new trailer for the game that gives a good overview. It can be found here.

Official Discord server

SEGA run an official Discord server for the Global version of PSO2. You can join it at https://discord.gg/pso2ngs

Guides

The Phantasy Star Fleet Discord server has a channel dedicated to guides for NGS, including a beginner guide and class guides! Check out the #en-ngs-guides-n-info channel for those.
In addition, Leziony has put together a Progression Guide for Novices. Whether you're new to the game or need a refresher, this guide may help you! Note: this uses terminology from the JP fan translation by Arks-Layer, so some terms may not match up with their Global equivalents.

Community Wiki

The Arks-Visiphone is a wiki maintained by Arks-Layer and several contributors. You can find the Global version here. There you can find details on equipment, quests, enemies and more!

Please check out the resources below:

If you are struggling to get assistance here, or if you are needing help from community developers (for translation plugins, the Tweaker, Telepipe Proxy) in a live* manner, join the Phantasy Star Fleet Discord server. *(Please read and follow the server rules. Live does not mean instant.)
Please start your question with "Global:" or "JP:" to better differentiate what region you are seeking help for.
(Click here for previous Game Questions and Help threads)

Megathreads

r/PSO2NGS has several Megathreads that are posted on a schedule or as major events such as NGS Headlines occur. Below are links to these.
submitted by AutoModerator to PSO2NGS [link] [comments]


2024.05.14 07:44 Murky_Egg_5794 CORS not working for app in Docker but works when run with a simple dotnet command

Hello everyone, I am totally new to Docker and I have been stuck on this for around 5 days now. I have a web app where my frontend uses React and Node.js and my backend is a C# ASP.NET server.
I have handled CORS policy blocking as below for my frontend (running on localhost:3000) to communicate with my backend (running on localhost:5268), and they work fine.
The code that handles CORS policy blocking:
    var MyAllowSpecificOrigins = "_myAllowSpecificOrigins";
    var builder = WebApplication.CreateBuilder(args);

    builder.Services.AddCors(options =>
    {
        options.AddPolicy(name: MyAllowSpecificOrigins,
            policy =>
            {
                policy.WithOrigins("http://localhost:3000/")
                      .AllowAnyMethod()
                      .AllowAnyHeader();
            });
    });

    builder.Services.AddControllers();
    builder.Services.AddHttpClient();

    var app = builder.Build();

    app.UseHttpsRedirection();
    app.UseCors(MyAllowSpecificOrigins);
    app.UseAuthorization();
    app.MapControllers();
    app.Run();
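One detail worth double-checking in the snippet above: WithOrigins("http://localhost:3000/") includes a trailing slash, but browsers send the Origin header without one, and CORS origin matching is an exact string comparison. A small illustrative sketch (hypothetical helper, not the real ASP.NET Core middleware):

```python
# Illustration only: allowed origins are matched against the browser's Origin
# header by exact string comparison, and browsers send the Origin header
# WITHOUT a trailing slash.

def origin_allowed(request_origin, allowed_origins):
    """Simplified exact-match check in the spirit of CORS origin matching."""
    return request_origin in allowed_origins

browser_origin = "http://localhost:3000"  # what the browser actually sends

print(origin_allowed(browser_origin, ["http://localhost:3000/"]))  # False: trailing slash never matches
print(origin_allowed(browser_origin, ["http://localhost:3000"]))   # True
```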
However, when I dockerize my backend and run the command docker run -p 5268:80 App to start its container, I receive an error in the browser:
Access to XMLHttpRequest at 'http://localhost:5268/news' from origin 'http://localhost:3000' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. 
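For context, this is roughly the check the browser performs when it reports that error — a simplified sketch, not actual browser source:

```python
# Rough model of the browser-side preflight check: the OPTIONS response must
# carry an Access-Control-Allow-Origin header that is either "*" or the exact
# requesting origin, otherwise the real request is blocked.

def preflight_allowed(response_headers, origin):
    """Return True if a preflight response would satisfy the browser (simplified)."""
    allow = response_headers.get("Access-Control-Allow-Origin")
    return allow == "*" or allow == origin

# No header at all (what the dockerized backend returned) -> blocked:
print(preflight_allowed({}, "http://localhost:3000"))  # False
# Header present and matching -> allowed:
print(preflight_allowed({"Access-Control-Allow-Origin": "http://localhost:3000"},
                        "http://localhost:3000"))      # True
```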
I added a Kestrel section to appsettings.json to change the base service port, as below:
    "Kestrel": {
      "EndPoints": {
        "Http": {
          "Url": "http://+:80"
        }
      }
    }
Here is my Dockerfile:
    # Get base SDK Image from Microsoft
    FROM mcr.microsoft.com/dotnet/sdk:7.0 AS build-env
    WORKDIR /app
    ENV ASPNETCORE_URLS=http://+:80
    EXPOSE 80

    # Copy the csproj and restore all of the nugets
    COPY *.csproj ./
    RUN dotnet restore

    # Copy the rest of the project files and build out release
    COPY . ./
    RUN dotnet publish -c Release -o out

    # Generate runtime image
    FROM mcr.microsoft.com/dotnet/sdk:7.0
    WORKDIR /app
    COPY --from=build-env /app/out .
    ENTRYPOINT [ "dotnet", "backend.dll" ]
Here is my launchSettings.json file's content:
    {
      "_comment": "For devEnv: http://localhost:5268 and for proEnv: https://kcurr-backend.onrender.com",
      "iisSettings": {
        "windowsAuthentication": false,
        "anonymousAuthentication": true,
        "iisExpress": {
          "applicationUrl": "http://localhost:19096",
          "sslPort": 44358
        }
      },
      "profiles": {
        "http": {
          "commandName": "Project",
          "dotnetRunMessages": true,
          "launchBrowser": true,
          "applicationUrl": "http://localhost:5268",
          "environmentVariables": {
            "ASPNETCORE_ENVIRONMENT": "Development"
          }
        },
        "https": {
          "commandName": "Project",
          "dotnetRunMessages": true,
          "launchBrowser": true,
          "applicationUrl": "https://localhost:7217;http://localhost:5268",
          "environmentVariables": {
            "ASPNETCORE_ENVIRONMENT": "Development"
          }
        },
        "IIS Express": {
          "commandName": "IISExpress",
          "launchBrowser": true,
          "environmentVariables": {
            "ASPNETCORE_ENVIRONMENT": "Development"
          }
        }
      }
    }
I did some research on this and found that I need to use NGINX to fix it, so I added an nginx.conf and told Docker to read it, as below.

Now my Dockerfile only has:
    # Read NGINX config to fix CORS policy blocking
    FROM nginx:alpine
    WORKDIR /etc/nginx
    COPY ./nginx.conf ./conf.d/default.conf
    EXPOSE 80
    ENTRYPOINT [ "nginx" ]
    CMD [ "-g", "daemon off;" ]
here is nginx.conf:
    upstream api {
        # Could be host.docker.internal - Docker for Mac/Windows - the host itself
        # Could be your API in a appropriate domain
        # Could be other container in the same network, like container_name:port
        server 5268:80;
    }

    server {
        listen 80;
        server_name localhost;

        location / {
            if ($request_method = 'OPTIONS') {
                add_header 'Access-Control-Max-Age' 1728000;
                add_header 'Access-Control-Allow-Origin' '*';
                add_header 'Access-Control-Allow-Headers' 'Authorization,Accept,Origin,DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Range,Range';
                add_header 'Access-Control-Allow-Methods' 'GET,POST,OPTIONS,PUT,DELETE,PATCH';
                add_header 'Content-Type' 'application/json';
                add_header 'Content-Length' 0;
                return 204;
            }

            add_header 'Access-Control-Allow-Origin' '*';
            add_header 'Access-Control-Allow-Headers' 'Authorization,Accept,Origin,DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Range,Range';
            add_header 'Access-Control-Allow-Methods' 'GET,POST,OPTIONS,PUT,DELETE,PATCH';
            proxy_pass http://api/;
        }
    }
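One thing to note about the upstream block above: nginx's server directive inside upstream takes an address:port pair, so server 5268:80; asks nginx to resolve "5268" as a hostname rather than use it as a port. If the backend container is published on the host's port 5268, a sketch of the likely-intended directive would be (the hostname here is an assumption about the setup):

    upstream api {
        # Docker Desktop (Mac/Windows): host.docker.internal reaches the host
        # from inside the container. In a shared Docker network, use the
        # backend container's name instead, e.g. server kcurr-backend:80;
        server host.docker.internal:5268;
    }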
When I build the image by running docker build -t kcurr-backend . and then run docker run -p 5268:80 kcurr-backend, no errors are shown on the console:
    2024/05/14 05:58:36 [notice] 1#1: using the "epoll" event method
    2024/05/14 05:58:36 [notice] 1#1: nginx/1.25.5
    2024/05/14 05:58:36 [notice] 1#1: built by gcc 13.2.1 20231014 (Alpine 13.2.1_git20231014)
    2024/05/14 05:58:36 [notice] 1#1: OS: Linux 6.6.22-linuxkit
    2024/05/14 05:58:36 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
    2024/05/14 05:58:36 [notice] 1#1: start worker processes
    2024/05/14 05:58:36 [notice] 1#1: start worker process 7
    2024/05/14 05:58:36 [notice] 1#1: start worker process 8
    2024/05/14 05:58:36 [notice] 1#1: start worker process 9
    2024/05/14 05:58:36 [notice] 1#1: start worker process 10
    2024/05/14 05:58:36 [notice] 1#1: start worker process 11
    2024/05/14 05:58:36 [notice] 1#1: start worker process 12
    2024/05/14 05:58:36 [notice] 1#1: start worker process 13
    2024/05/14 05:58:36 [notice] 1#1: start worker process 14
However, I still cannot connect my frontend to my backend and receive the same error in the browser as before. I also see a new error on the console:
    2024/05/14 05:58:42 [error] 8#8: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.65.1, server: localhost, request: "GET /curcurrency-country HTTP/1.1", upstream: "http://0.0.20.148:80/curcurrency-country", host: "localhost:5268", referrer: "http://localhost:3000/"
    2024/05/14 05:58:42 [error] 7#7: *2 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.65.1, server: localhost, request: "POST /news HTTP/1.1", upstream: "http://0.0.20.148:80/news", host: "localhost:5268", referrer: "http://localhost:3000/"
    192.168.65.1 - - [14/May/2024:05:58:42 +0000] "POST /news HTTP/1.1" 502 559 "http://localhost:3000/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" "-"
    192.168.65.1 - - [14/May/2024:05:58:42 +0000] "GET /curcurrency-country HTTP/1.1" 502 559 "http://localhost:3000/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" "-"
Does anyone know what I should do to fix the CORS policy blocking for my dockerized backend?
please help.
submitted by Murky_Egg_5794 to dotnetcore [link] [comments]

