Facebook proxy server
2008.11.22 00:38 Netflix
Unofficial Netflix discussion, and all things Netflix related! (Mods are not Netflix employees, but employees occasionally post here).
2013.04.20 19:28 captinbophus Hypixel
The Hypixel Network is a Minecraft server containing a variety of mini-games, including Bed Wars, SkyBlock, SMP, SkyWars, Murder Mystery, and more! We support versions 1.8 through 1.19! Play today on Minecraft Java > mc.hypixel.net
2014.07.30 17:32 Life Is Strange
Life is Strange is a series of games, published by Square Enix, revolving around a heavily story-driven narrative that is affected by your choices. The games are developed by Don't Nod and Deck Nine Games.
2024.05.14 10:01 AutoModerator Weekly Game Questions and Help Thread + Megathread Listing
Weekly Game Questions and Help Thread
Greetings all new, returning, and existing ARKS defenders! The "Weekly Game Questions and Help Thread" is posted every Wednesday on this subreddit for all your PSO2:NGS-related questions, technical support needs and general help requests. This is the place to ask any question, no matter how simple, obscure or repeatedly asked.
New to NGS? The official website has an overview for new players as well as a game guide. Make sure to use this obscure drop-down menu if you're on mobile to access more pages. If you like watching a video, SEGA recently released a new trailer for the game that gives a good overview. It can be found here.
Official Discord server: SEGA runs an official Discord server for the Global version of PSO2. You can join it at https://discord.gg/pso2ngs
Guides: The Phantasy Star Fleet Discord server has a channel dedicated to guides for NGS, including a beginner guide and class guides! Check out the #en-ngs-guides-n-info channel for those. In addition, Leziony has put together a Progression Guide for Novices. Whether you're new to the game or need a refresher, this guide may help you! Note: it uses terminology from the JP fan translation by Arks-Layer, so some terms may not match their Global equivalents.
Community Wiki: The Arks-Visiphone is a wiki maintained by Arks-Layer and several contributors. You can find the Global version here. There you can find details on equipment, quests, enemies and more!
Please check out the resources below: If you are struggling to get assistance here, or if you need help from community developers (for translation plugins, the Tweaker, Telepipe Proxy) in a live* manner, join the Phantasy Star Fleet Discord server. *(Please read and follow the server rules. Live does not mean instant.) Please start your question with "Global:" or "JP:" to better differentiate which region you are seeking help for.
(Click here for previous Game Questions and Help threads)
Megathreads: /PSO2NGS has several Megathreads that are posted on a schedule or as major events such as NGS Headlines occur. Below are links to these.
submitted by AutoModerator to PSO2NGS [link] [comments]
2024.05.14 09:35 InternationalOil336 Need help setting up automatic proxy discovery on a client PC
Hello everyone, I installed and configured the Squid proxy on a pfSense server, and I want the client PCs connected to the pfSense firewall to auto-discover the proxy, because right now I enter the address (IP:3128) manually. Thanks!
submitted by InternationalOil336 to PFSENSE [link] [comments]
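For context (this is not from the post): proxy auto-discovery is usually done with WPAD - the firewall serves a small PAC (proxy auto-config) file named wpad.dat over HTTP, and clients find it either via a `wpad` DNS hostname or via DHCP option 252. A minimal sketch of such a PAC file follows; the 192.168.1.0/24 subnet and the 192.168.1.1 firewall address are assumptions for illustration:

```shell
# Sketch: write a minimal WPAD/PAC file pointing clients at Squid on pfSense.
# The subnet and firewall address below are illustrative assumptions.
cat > wpad.dat <<'EOF'
function FindProxyForURL(url, host) {
    // Go direct for plain hostnames and local destinations
    if (isPlainHostName(host) || isInNet(host, "192.168.1.0", "255.255.255.0"))
        return "DIRECT";
    // Everything else goes through Squid on the pfSense box
    return "PROXY 192.168.1.1:3128";
}
EOF

# Clients only pick this up automatically if it is reachable as
# http://wpad.<your-domain>/wpad.dat (DNS) or advertised via DHCP option 252.
grep -c "PROXY 192.168.1.1:3128" wpad.dat
```

The PAC file itself is JavaScript evaluated by the client; the shell step above only generates and serves it.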
2024.05.14 09:10 BringTheRaine01 Google Home plugin thumbnail won't load outside local network
I installed Scrypted on my Home Assistant OS server as an add-on. I set up the Unifi Protect, Scrypted Cloud, and Google Home plugins. The Scrypted Cloud plugin is configured to use a custom domain on Cloudflare that directs traffic through a reverse proxy (Traefik). Everything works fine when connected to my local network. The Google Home app camera thumbnails load, and when I select a camera it streams without any issues. When I am not connected to my local network, the only thing that doesn't work is the camera thumbnail. Selecting a camera loads a stream without issues. Does anyone have any ideas why the thumbnail won't load?
submitted by BringTheRaine01 to Scrypted [link] [comments]
2024.05.14 08:00 AutoModerator Weekly Questions and Answers Post - FAQ, New/Returning Player Questions, and Useful Starting Resources!
2024.05.14 07:44 Murky_Egg_5794 CORS not working for app in Docker but works when run with a simple dotnet command
Hello everyone, I am totally new to Docker and I have been stuck on this for around 5 days now. I have a web app where my frontend uses React and Node.js, and my backend is a C# ASP.NET server.
I have handled CORS policy blocking as below for my frontend (running on localhost:3000) to communicate with my backend (running on localhost:5268), and they work fine.
The code that handles CORS policy blocking:
var MyAllowSpecificOrigins = "_myAllowSpecificOrigins";
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddCors(options =>
{
    options.AddPolicy(name: MyAllowSpecificOrigins, policy =>
    {
        policy.WithOrigins("http://localhost:3000/")
              .AllowAnyMethod()
              .AllowAnyHeader();
    });
});

builder.Services.AddControllers();
builder.Services.AddHttpClient();

var app = builder.Build();
app.UseHttpsRedirection();
app.UseCors(MyAllowSpecificOrigins);
app.UseAuthorization();
app.MapControllers();
app.Run();
However, when I dockerize my backend and run the command docker run -p 5268:80 App to start the container, I receive an error in my browser:
Access to XMLHttpRequest at 'http://localhost:5268/news' from origin 'http://localhost:3000' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.
I added a Kestrel section to appsettings.json to change the base service port, as below:
"Kestrel": {
  "EndPoints": {
    "Http": {
      "Url": "http://+:80"
    }
  }
}
Here is my Dockerfile:
# Get base SDK image from Microsoft
FROM mcr.microsoft.com/dotnet/sdk:7.0 AS build-env
WORKDIR /app
ENV ASPNETCORE_URLS=http://+:80
EXPOSE 80
# Copy the csproj and restore all of the nugets
COPY *.csproj ./
RUN dotnet restore
# Copy the rest of the project files and build out release
COPY . ./
RUN dotnet publish -c Release -o out
# Generate runtime image
FROM mcr.microsoft.com/dotnet/sdk:7.0
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT [ "dotnet", "backend.dll" ]
Here is my launchSettings.json file's content:
{
  "_comment": "For devEnv: http://localhost:5268 and for proEnv: https://kcurr-backend.onrender.com",
  "iisSettings": {
    "windowsAuthentication": false,
    "anonymousAuthentication": true,
    "iisExpress": {
      "applicationUrl": "http://localhost:19096",
      "sslPort": 44358
    }
  },
  "profiles": {
    "http": {
      "commandName": "Project",
      "dotnetRunMessages": true,
      "launchBrowser": true,
      "applicationUrl": "http://localhost:5268",
      "environmentVariables": { "ASPNETCORE_ENVIRONMENT": "Development" }
    },
    "https": {
      "commandName": "Project",
      "dotnetRunMessages": true,
      "launchBrowser": true,
      "applicationUrl": "https://localhost:7217;http://localhost:5268",
      "environmentVariables": { "ASPNETCORE_ENVIRONMENT": "Development" }
    },
    "IIS Express": {
      "commandName": "IISExpress",
      "launchBrowser": true,
      "environmentVariables": { "ASPNETCORE_ENVIRONMENT": "Development" }
    }
  }
}
I did some research on this and found that I need to use NGINX to fix it, so I added an nginx.conf and told Docker to read it, as below:
now my Dockerfile only has:
# Read NGINX config to fix CORS policy blocking
FROM nginx:alpine
WORKDIR /etc/nginx
COPY ./nginx.conf ./conf.d/default.conf
EXPOSE 80
ENTRYPOINT [ "nginx" ]
CMD [ "-g", "daemon off;" ]
here is nginx.conf:
upstream api {
    # Could be host.docker.internal - Docker for Mac/Windows - the host itself
    # Could be your API in an appropriate domain
    # Could be another container in the same network, like container_name:port
    server 5268:80;
}

server {
    listen 80;
    server_name localhost;

    location / {
        if ($request_method = 'OPTIONS') {
            add_header 'Access-Control-Max-Age' 1728000;
            add_header 'Access-Control-Allow-Origin' '*';
            add_header 'Access-Control-Allow-Headers' 'Authorization,Accept,Origin,DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Range,Range';
            add_header 'Access-Control-Allow-Methods' 'GET,POST,OPTIONS,PUT,DELETE,PATCH';
            add_header 'Content-Type' 'application/json';
            add_header 'Content-Length' 0;
            return 204;
        }
        add_header 'Access-Control-Allow-Origin' '*';
        add_header 'Access-Control-Allow-Headers' 'Authorization,Accept,Origin,DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Range,Range';
        add_header 'Access-Control-Allow-Methods' 'GET,POST,OPTIONS,PUT,DELETE,PATCH';
        proxy_pass http://api/;
    }
}
When I build the image by running docker build -t kcurr-backend . and then run the command docker run -p 5268:80 kcurr-backend, no errors are shown on the console:
2024/05/14 05:58:36 [notice] 1#1: using the "epoll" event method
2024/05/14 05:58:36 [notice] 1#1: nginx/1.25.5
2024/05/14 05:58:36 [notice] 1#1: built by gcc 13.2.1 20231014 (Alpine 13.2.1_git20231014)
2024/05/14 05:58:36 [notice] 1#1: OS: Linux 6.6.22-linuxkit
2024/05/14 05:58:36 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2024/05/14 05:58:36 [notice] 1#1: start worker processes
2024/05/14 05:58:36 [notice] 1#1: start worker process 7
2024/05/14 05:58:36 [notice] 1#1: start worker process 8
2024/05/14 05:58:36 [notice] 1#1: start worker process 9
2024/05/14 05:58:36 [notice] 1#1: start worker process 10
2024/05/14 05:58:36 [notice] 1#1: start worker process 11
2024/05/14 05:58:36 [notice] 1#1: start worker process 12
2024/05/14 05:58:36 [notice] 1#1: start worker process 13
2024/05/14 05:58:36 [notice] 1#1: start worker process 14
However, I still cannot connect my frontend to my backend and receive the same error in the browser as before. I also see a new error on the console:
2024/05/14 05:58:42 [error] 8#8: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.65.1, server: localhost, request: "GET /curcurrency-country HTTP/1.1", upstream: "http://0.0.20.148:80/curcurrency-country", host: "localhost:5268", referrer: "http://localhost:3000/"
2024/05/14 05:58:42 [error] 7#7: *2 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.65.1, server: localhost, request: "POST /news HTTP/1.1", upstream: "http://0.0.20.148:80/news", host: "localhost:5268", referrer: "http://localhost:3000/"
192.168.65.1 - - [14/May/2024:05:58:42 +0000] "POST /news HTTP/1.1" 502 559 "http://localhost:3000/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" "-"
192.168.65.1 - - [14/May/2024:05:58:42 +0000] "GET /curcurrency-country HTTP/1.1" 502 559 "http://localhost:3000/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" "-"
Does anyone know what I should do to fix the CORS policy blocking for my dockerized backend?
please help.
submitted by Murky_Egg_5794 to dotnetcore [link] [comments]
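A hedged observation on the posted nginx.conf (an educated guess, not a confirmed fix): the line `server 5268:80;` is not a valid host:port pair - nginx parses `5268` as a hostname, which matches the strange resolved upstream `http://0.0.20.148:80` in the error log. Inside a container, `localhost` also refers to the container itself, so the upstream must name something resolvable from inside Docker. A sketch of a corrected upstream, assuming Docker Desktop where `host.docker.internal` reaches the host:

```shell
# Sketch: rewrite the upstream block to a resolvable host:port pair.
# host.docker.internal is a Docker Desktop (Mac/Windows) convention; on Linux
# you would use the backend container's name on a shared Docker network instead.
cat > nginx-upstream.conf <<'EOF'
upstream api {
    # was: server 5268:80;  (nginx treats "5268" as a hostname, not a port)
    server host.docker.internal:5268;
}
EOF
grep "host.docker.internal:5268" nginx-upstream.conf
```

The rest of the posted config could stay as-is; only the upstream target changes.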
2024.05.14 07:38 Murky_Egg_5794 CORS not working for app in Docker but works when run with a simple dotnet command
Hello everyone, I am totally new to Docker and I have been stuck on this for around 5 days now. I have a web app where my frontend uses React and Node.js, and my backend is a C# ASP.NET server.
I have handled CORS policy blocking as below for my frontend (running on localhost:3000) to communicate with my backend (running on localhost:5268), and they work fine.
The code that handles CORS policy blocking:
var MyAllowSpecificOrigins = "_myAllowSpecificOrigins";
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddCors(options =>
{
    options.AddPolicy(name: MyAllowSpecificOrigins, policy =>
    {
        policy.WithOrigins("http://localhost:3000/")
              .AllowAnyMethod()
              .AllowAnyHeader();
    });
});

builder.Services.AddControllers();
builder.Services.AddHttpClient();

var app = builder.Build();
app.UseHttpsRedirection();
app.UseCors(MyAllowSpecificOrigins);
app.UseAuthorization();
app.MapControllers();
app.Run();
However, when I dockerize my backend and run the command docker run -p 5268:80 kcurr-backend to start the container, I receive an error in my browser:
Access to XMLHttpRequest at 'http://localhost:5268/news' from origin 'http://localhost:3000' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.
I added a Kestrel section to appsettings.json to change the base service port, as below:
"Kestrel": {
  "EndPoints": {
    "Http": {
      "Url": "http://+:80"
    }
  }
}
Here is my Dockerfile:
# Get base SDK image from Microsoft
FROM mcr.microsoft.com/dotnet/sdk:7.0 AS build-env
WORKDIR /app
ENV ASPNETCORE_URLS=http://+:80
EXPOSE 80
# Copy the csproj and restore all of the nugets
COPY *.csproj ./
RUN dotnet restore
# Copy the rest of the project files and build out release
COPY . ./
RUN dotnet publish -c Release -o out
# Generate runtime image
FROM mcr.microsoft.com/dotnet/sdk:7.0
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT [ "dotnet", "backend.dll" ]
Here is my launchSettings.json file's content:
{
  "_comment": "For devEnv: http://localhost:5268 and for proEnv: https://kcurr-backend.onrender.com",
  "iisSettings": {
    "windowsAuthentication": false,
    "anonymousAuthentication": true,
    "iisExpress": {
      "applicationUrl": "http://localhost:19096",
      "sslPort": 44358
    }
  },
  "profiles": {
    "http": {
      "commandName": "Project",
      "dotnetRunMessages": true,
      "launchBrowser": true,
      "applicationUrl": "http://localhost:5268",
      "environmentVariables": { "ASPNETCORE_ENVIRONMENT": "Development" }
    },
    "https": {
      "commandName": "Project",
      "dotnetRunMessages": true,
      "launchBrowser": true,
      "applicationUrl": "https://localhost:7217;http://localhost:5268",
      "environmentVariables": { "ASPNETCORE_ENVIRONMENT": "Development" }
    },
    "IIS Express": {
      "commandName": "IISExpress",
      "launchBrowser": true,
      "environmentVariables": { "ASPNETCORE_ENVIRONMENT": "Development" }
    }
  }
}
I did some research on this and found that I need to use NGINX to fix it, so I added an nginx.conf and told Docker to read it, as below:
Now my Dockerfile has an additional section:
# Get base SDK image from Microsoft
FROM mcr.microsoft.com/dotnet/sdk:7.0 AS build-env
WORKDIR /app
ENV ASPNETCORE_URLS=http://+:80
EXPOSE 80
# Copy the csproj and restore all of the nugets
COPY *.csproj ./
RUN dotnet restore
# Copy the rest of the project files and build out release
COPY . ./
RUN dotnet publish -c Release -o out
# Generate runtime image
FROM mcr.microsoft.com/dotnet/sdk:7.0
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT [ "dotnet", "backend.dll", "--launch-profile Prod" ]
# Read NGINX config to fix CORS policy blocking
FROM nginx:alpine
WORKDIR /etc/nginx
COPY ./nginx.conf ./conf.d/default.conf
EXPOSE 80
ENTRYPOINT [ "nginx" ]
CMD [ "-g", "daemon off;" ]
here is nginx.conf:
upstream api {
    # Could be host.docker.internal - Docker for Mac/Windows - the host itself
    # Could be your API in an appropriate domain
    # Could be another container in the same network, like container_name:port
    server 5268:80;
}

server {
    listen 80;
    server_name localhost;

    location / {
        if ($request_method = 'OPTIONS') {
            add_header 'Access-Control-Max-Age' 1728000;
            add_header 'Access-Control-Allow-Origin' '*';
            add_header 'Access-Control-Allow-Headers' 'Authorization,Accept,Origin,DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Range,Range';
            add_header 'Access-Control-Allow-Methods' 'GET,POST,OPTIONS,PUT,DELETE,PATCH';
            add_header 'Content-Type' 'application/json';
            add_header 'Content-Length' 0;
            return 204;
        }
        add_header 'Access-Control-Allow-Origin' '*';
        add_header 'Access-Control-Allow-Headers' 'Authorization,Accept,Origin,DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Range,Range';
        add_header 'Access-Control-Allow-Methods' 'GET,POST,OPTIONS,PUT,DELETE,PATCH';
        proxy_pass http://api/;
    }
}
When I build the image by running docker build -t kcurr-backend . and then run the command docker run -p 5268:80 kcurr-backend, no errors are shown on the console:
2024/05/14 05:58:36 [notice] 1#1: using the "epoll" event method
2024/05/14 05:58:36 [notice] 1#1: nginx/1.25.5
2024/05/14 05:58:36 [notice] 1#1: built by gcc 13.2.1 20231014 (Alpine 13.2.1_git20231014)
2024/05/14 05:58:36 [notice] 1#1: OS: Linux 6.6.22-linuxkit
2024/05/14 05:58:36 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2024/05/14 05:58:36 [notice] 1#1: start worker processes
2024/05/14 05:58:36 [notice] 1#1: start worker process 7
2024/05/14 05:58:36 [notice] 1#1: start worker process 8
2024/05/14 05:58:36 [notice] 1#1: start worker process 9
2024/05/14 05:58:36 [notice] 1#1: start worker process 10
2024/05/14 05:58:36 [notice] 1#1: start worker process 11
2024/05/14 05:58:36 [notice] 1#1: start worker process 12
2024/05/14 05:58:36 [notice] 1#1: start worker process 13
2024/05/14 05:58:36 [notice] 1#1: start worker process 14
However, I still cannot connect my frontend to my backend and receive the same error in the browser as before. I also see a new error on the console:
2024/05/14 05:58:42 [error] 8#8: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.65.1, server: localhost, request: "GET /curcurrency-country HTTP/1.1", upstream: "http://0.0.20.148:80/curcurrency-country", host: "localhost:5268", referrer: "http://localhost:3000/"
2024/05/14 05:58:42 [error] 7#7: *2 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.65.1, server: localhost, request: "POST /news HTTP/1.1", upstream: "http://0.0.20.148:80/news", host: "localhost:5268", referrer: "http://localhost:3000/"
192.168.65.1 - - [14/May/2024:05:58:42 +0000] "POST /news HTTP/1.1" 502 559 "http://localhost:3000/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" "-"
192.168.65.1 - - [14/May/2024:05:58:42 +0000] "GET /curcurrency-country HTTP/1.1" 502 559 "http://localhost:3000/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" "-"
Does anyone know what I should do to fix the CORS policy blocking for my dockerized backend?
please help.
submitted by Murky_Egg_5794 to docker [link] [comments]
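One likely issue with the combined Dockerfile in the post (an observation, not a verified fix): a multi-stage build only produces the final stage, so an image built from it contains just nginx - the .NET app in the earlier stages is discarded, which would explain the 502s. A common alternative is to run the API and nginx as two containers on one Docker network, for example with docker compose; the service names below are illustrative assumptions:

```shell
# Sketch: split the single multi-stage Dockerfile into two cooperating
# containers. nginx reaches the API by its compose service name ("api").
cat > docker-compose.yml <<'EOF'
services:
  api:
    build: .            # the ASP.NET Dockerfile (build + runtime stages only)
    expose:
      - "80"
  proxy:
    image: nginx:alpine
    depends_on:
      - api
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    ports:
      - "5268:80"
EOF
# In nginx.conf the upstream would then be:  server api:80;
grep "image: nginx:alpine" docker-compose.yml
```

With this layout, `docker compose up` starts both processes, whereas a single image can only have one ENTRYPOINT.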
2024.05.14 07:24 masterofrants [FOR HIRE] Experienced IT Consultant Seeking Part-Time role based in Vancouver, BC
Hey there, MSPs!
I'm on the hunt for a part-time IT role (20 hours per week), hybrid/onsite/remote. I am based in Vancouver, BC, CA.
Incorporated as a sole proprietorship in British Columbia, Canada: I missed an opportunity some months back with an MSP based in the US - I have now incorporated as a sole proprietorship in BC and am open to accepting offers from the USA!
Here's a snapshot of my skills and experience: Enrolled in the Computer and Information Systems PBD at Douglas College.
Skills: 8 years' experience deploying and supporting security solutions, including Palo Alto and Fortigate firewalls, the F5 BIG-IP product suite, Infoblox, and Broadcom/Symantec proxy solutions.
Certifications: CISSP, Palo Alto PCNSE, F5 LTM (301a, 301b), Infoblox CDCA, F5 ASM, AZ 900, AZ 104.
Other skills: I'm a Windows power user, well versed in Active Directory concepts and familiar with Linux as well, and can help MSPs with managed-user issues and so on. I do not have direct experience with server management; however, my Google skills and strong conceptual foundation make up for it!
Hands-on experience with Infoblox, Palo Alto Traps.
Miscellaneous: A lot of roles on Indeed here do ask for a driving license - I have a Class 5 BC license and own a vehicle as well!
Here's my full cv hosted on google drive (personal info redacted). You can view it by just clicking this link: https://drive.google.com/file/d/1paeBTpGet1APnr06U0qWEKcaQy1LTkbd/view?usp=sharing If you're looking for a tech-savvy individual with a passion for IT and strong problem-solving skills, I'd love to connect.
Please drop a message or chat if you'd like to discuss potential opportunities, or if you know someone hiring and could point them here - I would deeply appreciate it!
Looking forward to exploring the possibilities!
Best regards!
submitted by masterofrants to mspjobs [link] [comments]
2024.05.14 06:38 ailm32442 Integrate GPT-4o into comfyui to achieve LLM visual functions!
GPT-4o has been released, and I'm joining the excitement by enabling my ComfyUI agent open-source project to support GPT-4o integration into ComfyUI, achieving visual functions. The project address is: heshengtao/comfyui_LLM_party: A set of block-based LLM agent node libraries designed for ComfyUI development. In my open-source project, you can use these features: - You can right-click in the ComfyUI interface, select `llm` from the context menu, and you will find the nodes for this project. [how to use nodes](how_to_use_nodes.md)
- Supports API integration or local large model integration, with a modular implementation for tool invocation. When entering the base_url, please use a URL that ends with `/v1/`. You can use [ollama](https://github.com/ollama/ollama) to manage your model. Then, enter `http://localhost:11434/v1/` for the base_url, ollama for the api_key, and your model name for the model_name, such as: llama3. If the call fails with a 503 error, you can try turning off the proxy server.
- Local knowledge base integration with RAG support.
- Ability to invoke code interpreters.
- Enables online queries, including Google search support.
- Implement conditional statements within ComfyUI to categorize user queries and provide targeted responses.
- Supports looping links for large models, allowing two large models to engage in debates.
- Attach any persona mask, customize prompt templates.
- Supports various tool invocations, including weather lookup, time lookup, knowledge base, code execution, web search, and single-page search.
- Use LLM as a tool node.
- Rapidly develop your own web applications using API + Streamlit. The picture below is an example of a drawing application.
- Added a dangerous omnipotent interpreter node that allows the large model to perform any task.
- It is recommended to use the `show_text` node under the `function` submenu of the right-click menu as the display output for the LLM node.
https://preview.redd.it/5qlvjmaiob0d1.png?width=2100&format=png&auto=webp&s=5c04d31f6684d24da7729ed6771835f76ace78e9
submitted by ailm32442 to comfyui [link] [comments]
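As an aside on the base_url convention described above: ollama exposes an OpenAI-compatible API under `/v1/`, so a chat call is an ordinary JSON POST. A sketch of the request body follows (the model name is just an example; the curl line assumes ollama's default port from the post):

```shell
# Sketch: build an OpenAI-style chat request body for ollama's /v1/ endpoint.
cat > request.json <<'EOF'
{
  "model": "llama3",
  "messages": [
    { "role": "user", "content": "Describe this scene briefly." }
  ]
}
EOF
# It would be sent with something like:
#   curl http://localhost:11434/v1/chat/completions \
#     -H "Authorization: Bearer ollama" \
#     -H "Content-Type: application/json" \
#     -d @request.json
grep '"model": "llama3"' request.json
```

This is why any OpenAI-compatible client (including the nodes above) can talk to a local model just by swapping base_url and api_key.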
2024.05.14 05:24 The_Dukes_Of_Hazzard Using home theatre PC as a proxy server for other devices on the same network?
So basically my devices get turned off at 10 pm every night, but our home theatre PC runs Windows 10 and its internet stays on all night. Is there any way I can use its internet connection and share it with my other devices? Preferably over the same network.
Also, sorry if I sound dumb - I don't know a lot about networking.
submitted by The_Dukes_Of_Hazzard to HomeNetworking [link] [comments]
2024.05.14 05:19 Tracking_boss Remote GTM jobs in agencies
Hello everyone, I'm a "web analytics and conversion tracking" freelancer and I love working in this field. Up to now, I have consulted and worked with a lot of agencies and paid-ads pros on a project basis.
My specialty is in Google Tag Manager, Google Ads Conversion tracking, Facebook Pixel Conversion API, Google Analytics, Server side Tracking, Cookie Consent, etc.
I’m looking for such agencies or in-house teams who might need someone like me. I can set up all sorts of tracking including some simple Javascript & dataLayer.push() stuff, Enhanced Conversions, and can also work with cookies & consent modes.
Any leads would be greatly appreciated.
P.S. I can only work remotely.
submitted by Tracking_boss to GoogleTagManager [link] [comments]
2024.05.14 04:25 Ok-Kaleidoscope926 Can Lucky Patcher work on an outdated but still-running game app?
There is this game called Bio Inc Nemesis. It's not available on the Play Store, but it's still running; I downloaded it from APKPure and Aptoide. When I patched the game and tapped on the purchase, it just says "transaction error". Even when I changed the purchase method proxy server to (I don't remember what it's called), it still doesn't work. Please help me, guys.
submitted by Ok-Kaleidoscope926 to luckypatcher [link] [comments]
2024.05.14 04:03 RndmNerd I WANT TO RACE TONIGHT - Sim Racing Using WSS (World Sim Series)
Hey everybody, I am a first-time poster, so excuse my lack of fancy spacing or links, haha.
I am trying to find more people to race with. I'm on PC and like using WSS to set up my races (I would use a similar app if there is a more popular option).
Anyway, I'm having a really hard time finding a Discord server or even active Facebook groups where I can find people to race with, even on WSS, in my time zone (MTN); I can't seem to get enough people to start the events lately.
I have a Car Club discord server that we could use if nobody has suggestions.
I just want to race with a full lineup and make some friends; hopefully the great narwhal will provide :P
submitted by RndmNerd to simracing [link] [comments]
2024.05.14 03:24 Vexxicus Official appliance or Virtual?
Hey all - I've been looking at getting OPNsense for a bit now, and I've seen a lot of people mention they have it virtualized. I'm going to be setting up a Proxmox cluster soon and migrating over from Hyper-V. I'm wondering whether what I want to do in the end will be possible with a VM, or whether it's possible to start with a VM and then backup/restore onto an appliance.
- I'd like to eventually be able to handle a 10G network but I'm not there yet, more of a future proof item.
- I will have multiple VLANs and a managed switch - POE IP Cameras (future) on one, IOT devices on another, a main network, guest / client isolation network, maybe a media network but probably not really necessary.
- I'll be setting up a reverse proxy.
- I'll be playing around with firewall.
- Play around with any other cool things OPNsense can do as I learn.
- I'll have 2 or 3 POE access points on the managed switch
The Proxmox hosts do have 8 1Gb NICs each (2 servers), but I'm not sure if I can assign a VLAN to those NICs or how it works for outgoing traffic, and of course I'd have to upgrade later for 10Gb NICs. If that doesn't work, I was thinking of either the DEC675 or 695. Have many people bought the official appliances? How has your experience been? Would love to hear your experiences!
submitted by Vexxicus to opnsense [link] [comments]
2024.05.14 02:00 Alex-Lasdx Using nginx to achieve dynamic reverse proxy for paths and ports
I have deployed four services on the same machine, each on different ports. I'm trying to configure Nginx (using OpenResty) to dynamically select the API port and path per request, but I've been struggling with this for five hours. I'm not very familiar with Nginx/OpenResty, and it keeps throwing this error: nginx: [emerg] unknown "api_port" variable. Below is my complete configuration:
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;
    sendfile on;
    keepalive_timeout 65;

    # Define server for a specific port
    server {
        listen 43321;
        location /error {
            root /var/www/html;
            internal;
        }
    }

    # Server for API redirection with error handling
    server {
        listen 44321;
        set $api_port "44321";
        set $api_path "/error";

        # Location for retrieving API path dynamically
        location /get-api-path {
            internal;
            proxy_pass http://127.0.0.1:43951/get-path;
            proxy_set_header Content-Length "";
            proxy_set_header X-Server-IP $remote_addr;
            proxy_set_header X-Original-URI $request_uri;
            proxy_pass_request_body off;
            proxy_set_body "";
            proxy_buffering on;
            proxy_buffers 16 4k;
            proxy_buffer_size 2k;
            proxy_intercept_errors on;
            error_page 401 403 404 /error;
        }

        # Handling specific API path
        location /rest/starcat/steam {
            content_by_lua_block {
                local res = ngx.location.capture("/get-api-path");
                if res.status == 200 then
                    ngx.log(ngx.ERR, "Success: ", res.body);
                    local port, path = string.match(res.body, "^(%d+),(.*)$")
                    if port and path then
                        local target_url = "http://127.0.0.1:" .. port .. path
                        local proxy_res = ngx.location.capture(target_url)
                        if proxy_res.status == 200 then
                            ngx.print(proxy_res.body)
                        else
                            ngx.log(ngx.ERR, "Proxy failed. Status: ", proxy_res.status)
                            ngx.exit(proxy_res.status)
                        end
                    else
                        ngx.log(ngx.ERR, "Parsing error. Body: ", res.body);
                        ngx.exit(444);
                    end
                else
                    ngx.log(ngx.ERR, "Capture failed. Status: ", res.status);
                    ngx.exit(444);
                end
            }
        }

        # Proxy for error handling
        location @proxy {
            proxy_pass http://127.0.0.1:$api_port$api_path;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }

    # Include additional configurations
    # include /etc/nginx/conf.d/*.conf;
    # include /etc/nginx/upstreams/*.conf;
    # include /etc/nginx/snippets/*.conf;
}

I have predefined the default port and path, but the configuration still throws errors. If you have a better solution or can spot what's wrong, please let me know. Any help would be greatly appreciated!
submitted by Alex-Lasdx to nginx [link] [comments]
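For what it's worth (a sketch under assumptions, not a verified diagnosis of the posted config): in OpenResty, per-request dynamic targets are usually set from Lua rather than computed in config, by declaring the variables with `set` and then writing them via `ngx.var` in an `access_by_lua_block` before a variable `proxy_pass`. A minimal shape, reusing the post's internal `/get-api-path` location:

```shell
# Sketch of the usual OpenResty pattern for a dynamic proxy target.
# Assumes the /get-api-path internal location from the post exists and
# returns a "port,path" body.
cat > dynamic-proxy.conf <<'EOF'
server {
    listen 44321;

    location /rest/starcat/steam {
        # Declare the variables so nginx knows them at config-parse time
        set $api_port "44321";
        set $api_path "/error";

        access_by_lua_block {
            -- look up the real port/path per request, then export them
            local res = ngx.location.capture("/get-api-path")
            if res.status == 200 then
                local port, path = string.match(res.body, "^(%d+),(.*)$")
                if port and path then
                    ngx.var.api_port = port
                    ngx.var.api_path = path
                end
            end
        }

        proxy_pass http://127.0.0.1:$api_port$api_path;
    }
}
EOF
grep "access_by_lua_block" dynamic-proxy.conf
```

The "unknown variable" emerg typically means a `$api_port` reference is parsed in a context where no `set` (or other declaration) has introduced it, so keeping the `set` lines next to the `proxy_pass` that uses them is the safe shape.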
2024.05.14 01:00 livia2lima Day 7 - The server and its services
INTRO
Today you'll install a common server application - the Apache2 web server - also known as
httpd - the "Hypertext Transfer Protocol daemon"!
If you’re a website professional then you might do things slightly differently, but our focus here is not on Apache itself or the website content, but on getting a better understanding of:
- application installation
- configuration files
- services
- logs
YOUR TASKS TODAY
- Install and run Apache, transforming your server into a web server
INSTRUCTIONS
- Refresh your list of available packages (apps) by: sudo apt update - this takes a moment or two, but ensures that you'll be getting the latest versions.
- Install Apache from the repository with a simple: sudo apt install apache2
- Confirm that it’s running by browsing to http://[external IP of your server] - where you should see a confirmation page.
- Apache is installed as a "service" - a program that starts automatically when the server starts and keeps running whether anyone is logged in or not. Try stopping it with the command: sudo systemctl stop apache2 - check that the webpage goes dead - then re-start it with sudo systemctl start apache2 - and check its status with: systemctl status apache2.
- As with the vast majority of Linux software, configuration is controlled by files under the /etc directory - check the configuration files under /etc/apache2 especially /etc/apache2/apache2.conf - you can use less to simply view them, or the vim editor to view and edit as you wish.
- In /etc/apache2/apache2.conf there's the line with the text: "IncludeOptional conf-enabled/*.conf". This tells Apache that the *.conf files in the subdirectory conf-enabled should be merged in with those from /etc/apache2/apache2.conf at load. This approach of lots of small specific config files is common.
- If you're familiar with configuring web servers, then go crazy: set up some virtual hosts, or add in some mods etc.
- The location of the default webpage is defined by the DocumentRoot parameter in the file /etc/apache2/sites-enabled/000-default.conf.
- Use less or vim to view the code of the default page - normally at /var/www/html/index.html. This uses fairly complex modern web design - so you might like to browse to http://165.227.92.20/sample where you'll see a much simpler page. Use View Source in your browser to see the code of this, copy it, and then, in your ssh session sudo vim /var/www/html/index.html to first delete the existing content, then paste in this simple example - and then edit to your own taste. View the result with your workstation browser by again going to http://[external IP of your server]
- As with most Linux services, Apache keeps its logs under the /var/log directory - look at the logs in /var/log/apache2 - in the access.log file you should be able to see your session from when you browsed to the test page. Notice that there's an overwhelming amount of detail - this is typical, but in a later lesson you'll learn how to filter out just what you want. Notice the error.log file too - hopefully this one will be empty!
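As a tiny preview of that filtering, here is a fabricated line in Apache's default "combined" log format, with awk pulling out just the client IP (field 1) and the HTTP status code (field 9):

```shell
# A sample access.log line (made up, not real traffic)
line='203.0.113.7 - - [13/May/2024:23:41:00 +0000] "GET / HTTP/1.1" 200 3380 "-" "Mozilla/5.0"'

# Print client IP and HTTP status code
printf '%s\n' "$line" | awk '{print $1, $9}'
# -> 203.0.113.7 200
```

The same one-liner works on the real file, e.g. piped from `sudo tail /var/log/apache2/access.log`.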
Note for AWS/Azure/GCP users
Don't forget to add port 80 to your instance security group to allow inbound traffic to your server.
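On AWS this can also be done from the CLI; a sketch, assuming the `aws` CLI is installed and configured, with a placeholder security-group id. The command is echoed rather than executed so it is safe to inspect before running for real:

```shell
# sg-0123456789abcdef0 is a placeholder -- substitute your instance's
# actual security group id. Drop the leading "echo" to actually run it.
echo aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 80 --cidr 0.0.0.0/0
```

Azure and GCP have equivalent commands (`az network nsg rule create`, `gcloud compute firewall-rules create`); consult their docs for the exact flags.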
POSTING YOUR PROGRESS
Practice your text-editing skills, and allow your "classmates" to judge your progress by editing /var/www/html/index.html with vim and posting the URL to access it to the forum. (It doesn’t have to be pretty!)
SECURITY
- As the sysadmin of this server, responsible for its security, you need to be very aware that you've now increased the "attack surface" of your server. In addition to ssh on port 22, you are now also exposing the apache2 code on port 80. Over time the logs may reveal access from a wide range of visiting search engines, and attackers - and that’s perfectly normal.
- If you run the commands: sudo apt update, then sudo apt upgrade, and accept the suggested upgrades, then you'll have all the latest security updates, and be secure enough for a test environment - but you should re-run this regularly.
EXTENSION
Read up on:
RESOURCES
TROUBLESHOOT AND MAKE A SAD SERVER HAPPY!
Practice what you've learned with some challenges at
SadServers.com:
PREVIOUS DAY'S LESSON
Some rights reserved. Check the license terms
here submitted by
livia2lima to
linuxupskillchallenge [link] [comments]
2024.05.14 00:22 Adiventure Fixing/updating firewall/security rules
I've had a Unifi stack for a few years, and in that time my needs have grown and morphed. I'm trying to get things pretty buttoned away now, but I also know I'm generally pretty ignorant, so hopeful for some guidance.
As it is I have 10 networks, each with their own vlan:
Default (this has all my switches, APs and my DNS server)
IOT 1: This has my personal IOT devices
IOT 2: IOT devices that don't belong to me
Trusted 1: My phone/PCs
Trusted 2: Other's phones/PCs
Cameras: My protect cameras
Server: My media/homelab server and a printer (I'm not sure if the printer should maybe be on either an IOT net, or its own)
Secure: Currently nothing, the thought was either VPN out, or trusted devices that I wouldn't want accessing anything local
Guest: Self explanatory
DMZ: Atlas probe
I've got 8 wifi networks:
Trusted 1 2.4/5
Trusted 1 5GHz
Trusted 2 2.4/5
IOT 1 2.4/5GHz
IOT 2 2.4/5GHz
IOT 2 5GHz
Guest (currently disabled) 2.4/5GHz
Cameras 2.4/5GHz
I'd happily simplify the networks through RADIUS or having the password choose the VLAN, but I'm not sure of the best avenue to that, particularly one that wouldn't mess things up for older devices.
This is where my ignorance really steps up. Broadly, I want all the security I can have without screwing up usability.
IoT wise that means I still want Google Home/Chromecast/Alexa to work, ditto Ring/Hubitat/SmartThings/Ecowitt/Hue/Govee/Lutron/whatever I'm missing.
I also want to manage my IoT devices from their matching Trusted network (so Trusted 1 to IoT 1, Trusted 2 to IoT2) as well as Trusted 1 to IoT 2 and IoT 1 to IoT 2. This may be a convoluted mistake. The idea was that I could manage any of the devices from my Trusted net, and my IoT devices could interact with any other IoT but not the other way.
For my server that means being able to stream outside the network, via Plex/Emby (with reverse proxy), and eventually I'll get HomeAssistant and some other toys running on it.
Cameras need connectivity to my UNVR, and I believe possibly my server (for scrypted/Homeassistant) but nothing else.
DMZ/Secure: Fully isolated from anything local, and then whatever the guidance is for a probe.
Even as I typed this I thought 'alright, I can do this', only to look at the new interface on the gateway and its apparent 102 rules, most of which seem to be auto-created, and feel my stomach knot up. I think I just need a small bit of guidance on unfucking how I'm thinking about it, plus a few tips on setup, and I can get there, but from this starting point it's daunting.
submitted by
Adiventure to
Ubiquiti [link] [comments]
2024.05.13 23:41 NthBlueDream Facebook Messenger - setting up end-to-end encryption without losing anything
There are many posts about this, but I hope to gather some clear information about this change, as the
Help page is of limited use.
On my phone, the Messenger app is asking me to set up a PIN (or another option) to access chat history across devices.
- I've seen discussions about "secret" chats and "normal" chats merging. Are old conversations "converted", or do they remain as they were? Is "secret" the only option for new chats?
This
official Q&A says that chat history will now be stored on devices (presumably with a knock-on effect on both device storage and phone backup file size), unless you turn on secure storage, in which case data remains on Facebook's servers.
- Does using secure storage mean that Low storage mode does not come into play?
I can still use messaging in a desktop web browser as normal at facebook.com. It appears that message history is loaded from the server as I scroll.
- Will I be forced to use end-to-end encryption on this interface at some point?
The choice about how to access chat history - I guess PIN is the best choice if you use Messenger cross-platform. (I note that it is only the key, not the chat history, which is stored with Apple or Google if you choose that option.)
- Those who have set it up - which did you choose and why? I'm not going to worry about why you would choose a 40-character code when you can choose a 6-character PIN...
Grateful for any thoughts on these interrelated questions, do let me know if I've made incorrect assumptions. Main aim is to avoid losing any data. I'm now concerned about deleting the Messenger app in case chat history is lost.
In reading about this I also found out some other things: the website
messenger.com; the Messenger desktop app; there was for a while an app called Messenger Lite; and the fact that messaging has been added back into the main Facebook app having been removed some years ago. So it's all a bit of a mess.
It occurs to me that this implementation is quite different to WhatsApp, which does not retain messages on a server. Facebook bought the former, and at one time announced an intention to merge the platforms, so you might have expected them to switch to the same model.
submitted by
NthBlueDream to
facebook [link] [comments]
2024.05.13 23:17 JayRupp Has anyone had issues using PIA with Steam (e.g. receiving a ban/suspension?)
Topic. I'm a long-time PIA user, but I've always been nervous about having it running while Steam is. Considering how often I use both PIA and Steam, it would be much easier on my end if I could just leave them both running. I'm not interested in using PIA to manipulate Steam's regional pricing, and I'd have no problem closing PIA prior to making any purchases. Has anyone ever received a Steam ban or suspension for using PIA? Thanks in advance.
Edit: I tried reaching out to Steam Support directly, and they refuse to give a definitive response. I don't get it. Perhaps they want to retain the option to take action in the future.
Me: I'm not satisfied with the information found in the Steam Subscriber Agreement concerning the usage of VPNs, and I don't want to jeopardize my account. Are Steam users allowed to use VPNs while Steam is running (assuming no purchases are made and/or attempted)?
Support: Steam Support does not offer any information or support for VPN/proxy issues. However, we do advise that you disable or remove it from your computer. Such software is known to cause issues with purchasing through the Steam Store and connecting to the Steam network or game servers.
submitted by
JayRupp to
PrivateInternetAccess [link] [comments]
2024.05.13 23:14 Sauws Retro gaming marketplace Discord server?
Hello,
I was wondering if anyone knows of a Discord server or Reddit group for buying and selling retro gaming items in the Benelux?
- Subreddit? - Discord server? - Facebook groups? - Signal / WhatsApp / ... groups
submitted by
Sauws to
belgium [link] [comments]
2024.05.13 23:10 Mental_Act4662 Proxy Portainer through Traefik
I'm having some issues setting up Portainer to proxy through Traefik.
Here is my Portainer `docker compose` file.
```
services:
  portainer:
    image: portainer/portainer-ce:latest
    container_name: portainer
    restart: unless-stopped
    security_opt:
    networks:
      - proxy
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /opt/portainer/data:/data
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.portainer.entrypoints=http"
      - "traefik.http.routers.portainer.rule=Host(`portainer.lab.mydomain.com`)"
      - "traefik.http.routers.portainer.tls=true"
      - "traefik.http.services.portainer.loadbalancer.server.port=9000"

networks:
  proxy:
    external: true
```
I can see it deployed in my Traefik dashboard. But when I try to go to `portainer.lab.mydomain.com` I just get an "Internal Server Error"
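Two things worth double-checking with this symptom: that the container is actually attached to the `proxy` network Traefik watches, and that the `Host()` rule has backticks on *both* sides of the domain (markdown, including Reddit, loves to eat one of them, and Traefik silently misroutes on a malformed rule). A small grep-based lint of the label value, using the domain from the post:

```shell
# The rule value as it should appear in the compose label
rule='Host(`portainer.lab.mydomain.com`)'

# Lint: Host( backtick domain backtick ) -- prints "rule OK" when well formed
printf '%s\n' "$rule" | grep -Eq 'Host\(`[^`]+`\)' && echo "rule OK" || echo "rule malformed"
# -> rule OK
```

If the rule is fine, the next step would be checking Traefik's own logs for the error the dashboard is hiding.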
submitted by
Mental_Act4662 to
selfhosted [link] [comments]
2024.05.13 23:05 Sad-Copy-1728 HOW TO HOST/INTEGRATE chromium/ chrome on aws lambda Node js 18.x puppeteer
I have been trying for the past 3 weeks, and I am getting this error in one way or another. I have created layers and tried many ways, but all were in vain. I need help: I am converting HTML to PDF using Puppeteer and @sparticuz/chromium on Node.js 18.x. I have tried so many ways that I am fed up. If anyone knows how to do this, please help me.
submitted by
Sad-Copy-1728 to
node [link] [comments]
2024.05.13 23:04 Sad-Copy-1728 How to host/integrate chromium/chrome with aws lambda NodeJs ...
I have been trying for the past 3 weeks, and I am getting this error in one way or another. I have created layers and tried many ways, but all were in vain. I need help: I am converting HTML to PDF using Puppeteer and @sparticuz/chromium on Node.js 18.x. I have tried so many ways that I am fed up. If anyone knows how to do this, please help me.
submitted by
Sad-Copy-1728 to
aws [link] [comments]