HOA proxy template


2015.02.22 21:43 trailrunn Tales from the HOA

Tell your HOA Horror Stories on our ugly template
[link]


2024.05.15 14:21 Honeysyedseo How I Built a $2,000 per Month Passive Income Rank and Rent Website

Rank and Rent website making $2,000 per month.
This website runs on autopilot.
I didn't touch this website for 3 months and it is making $$$.
Around 30-40 leads per month.
Ranking the main keywords on SERPs and maps.
Here is what we did:

Set up a WordPress website

Content is king

Set up GMB

GMB-verified

Social media and citations

Iterate the content

Internal link building

PR

CTR

Reviews, Reviews, Reviews

(You can save this or ask me any question)
Source
submitted by Honeysyedseo to pSEOnewsletter [link] [comments]


2024.05.15 11:14 Character_Ask8343 Nginx Proxy Manager not secured in EKS

Hi everyone,
I'm currently deploying an application on Amazon EKS and using Nginx Proxy Manager to manage my proxy configurations. However, I've encountered an issue where my application is not showing as secured (no HTTPS).
Here's my setup:
I've followed the standard setup procedures, but my application still doesn't show as secured when accessed via the browser.
Can anyone provide guidance on what might be causing this issue or what additional steps I might need to take to ensure my application is secured properly?
Do I need to use a custom SSL certificate? If so, which path do I need to put the custom certificate in? Or what did I miss?
Thanks in advance for your help!
Below are my manifests:
#! Client Ingress
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-proxy-manager-ingress
  namespace: dev
  annotations:
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-origin: '*'
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - np-nginx-manager-xxx.com
      secretName: xxxx
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-proxy-manager-service
                port:
                  number: 81
          # - path: /
          #   pathType: Prefix
          #   backend:
          #     service:
          #       name: nginx-proxy-manager-service
          #       port:
          #         number: 80

# Deployment
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-proxy-manager-deployment
  labels:
    name: nginx-proxy-manager-deployment
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-proxy-manager
  template:
    metadata:
      labels:
        app: nginx-proxy-manager
    spec:
      nodeSelector:
        Type: default
        SubnetType: xx
        RunApp: xx
        Env: xx
      containers:
        - name: nginx-proxy-manager-deployment
          image: jc21/nginx-proxy-manager:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 80
            - containerPort: 81
            - containerPort: 443
          volumeMounts:
            - name: letsencrypt
              mountPath: /etc/letsencrypt
            - name: data
              mountPath: /data
          resources:
            limits:
              cpu: 1000m
              memory: 1Gi
            requests:
              cpu: 100m
              memory: 100Mi
      volumes:
        - name: letsencrypt
        - name: data

# Service
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-proxy-manager-service
  labels:
    name: nginx-proxy-manager-service
  namespace: dev
spec:
  ports:
    - name: web-ui
      port: 81
      targetPort: 81
      protocol: TCP
    - name: http-port
      port: 80
      targetPort: 80
      protocol: TCP
    - name: https-port
      port: 443
      targetPort: 443
      protocol: TCP
  selector:
    app: nginx-proxy-manager
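One thing worth checking in the manifest above: the only active Ingress rule sends traffic to service port 81, which is Nginx Proxy Manager's admin UI, rather than ports 80/443 where proxied hosts are actually served; that alone can explain a site that never appears as secured. As a hypothetical sanity check (not an official tool), a quick lint over a parsed Ingress spec:

```python
# Hypothetical helper (not part of any official tool): flag Ingress backend
# ports that look wrong for public traffic. NPM serves proxied hosts on
# 80/443; port 81 is its admin UI.

def insecure_backend_ports(ingress_spec, allowed_ports=(80, 443)):
    """Return backend service port numbers outside the allowed set."""
    suspicious = []
    for rule in ingress_spec.get("rules", []):
        for path in rule.get("http", {}).get("paths", []):
            port = path["backend"]["service"]["port"]["number"]
            if port not in allowed_ports:
                suspicious.append(port)
    return suspicious

# Mirrors the manifest above: the only active rule targets port 81.
spec = {
    "rules": [{
        "http": {"paths": [{
            "path": "/",
            "pathType": "Prefix",
            "backend": {"service": {
                "name": "nginx-proxy-manager-service",
                "port": {"number": 81},
            }},
        }]},
    }],
}
print(insecure_backend_ports(spec))  # -> [81]
```

If this flags port 81, pointing the rule at port 80 (the commented-out path) and letting the Ingress controller terminate TLS is the usual shape of the fix.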
submitted by Character_Ask8343 to nginxproxymanager [link] [comments]


2024.05.14 17:30 ExaminationOdd8421 user_proxy.initiate_chat summary_args

I created an agent that, given a query, searches the web using BING and then scrapes the top posts using the APIFY scraper. For each post I want a summary using summary_args, but I have a couple of questions:
  1. Is there a limit on how many things we can have in summary_args? When I add more, I get: "Given the structure you've requested, it's important to note that the provided Reddit scrape results do not directly offer all the detailed information for each field in the template. However, I'll construct a summary based on the available data for one of the URLs as an example. For a comprehensive analysis, each URL would need to be individually assessed with this template in mind." (I want all of the URLs, but it only outputs one.)
  2. Is there a way to store the summary_args output locally? Any suggestions?
chat_result = user_proxy.initiate_chat(
    manager,
    message="Search the web for information about Deere vs Bobcat on reddit, scrape them and summarize in detail these results.",
    summary_method="reflection_with_llm",
    summary_args={
        "summary_prompt": """Summarize each scraped reddit post and format the summary EXACTLY as follows:
data = {
    URL: url used,
    Date Published: date of post or comment,
    Title: title of post,
    Models: what specific models are mentioned?,
    ... (15 more things)...
}
""",
    },
)
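For question 2, a minimal way to persist each summary locally is to write it out after `initiate_chat` returns. This sketch assumes `chat_result.summary` holds the reflection_with_llm output as a string, which is how AutoGen's ChatResult exposes it; adjust the attribute name if your version differs.

```python
# Minimal local persistence for chat summaries (hypothetical helper,
# not part of AutoGen itself).
import json
import time
from pathlib import Path

def save_summary(summary_text, out_dir="summaries"):
    """Write one summary to a timestamped JSON file and return its path."""
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    path = out / f"summary_{int(time.time())}.json"
    path.write_text(json.dumps({"summary": summary_text}, indent=2))
    return path

# After initiate_chat:
# save_summary(chat_result.summary)
```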
Thanks!!!
submitted by ExaminationOdd8421 to AutoGenAI [link] [comments]


2024.05.14 12:21 rweninger Nextcloud Upgrade from chart version 1.6.61 to 2.0.5 failed

I am not sure if I actually want to solve this issue; I just want to vent.
iX, what do you think to yourself when you print out this error message to a "customer"?
I mean, your installation of Kubernetes on a single host is crap, and using Helm charts that utterly break in an atomic chain reaction like that doesn't make it trustworthy. I am in the process of migrating Nextcloud away from TrueNAS to a Docker host and will just use TrueNAS as storage.
I don't care about the sensitive data down there; at the time of posting, this system isn't running anymore. Sorry if I annoy somebody.
[EFAULT] Failed to upgrade App:
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /etc/rancher/k3s/k3s.yaml
Error: UPGRADE FAILED: execution error at (nextcloud/templates/common.yaml:38:4): Chart - Values contain an error that may be a result of merging.
Values containing the error: 'error converting YAML to JSON: yaml: invalid leading UTF-8 octet'

The error is followed by the complete merged chart values. The database-related excerpt:

ncDbHost: nextcloud-postgres
ncDbName: nextcloud
ncDbPass: XvgIoT84hMmNDlH
ncDbUser: ��-���

[The rest of the pasted values dump — the bundled nginx.conf, the TrueNAS default certificate and private key, the persistence, service, and workload definitions, and the upgrade metadata (oldChartVersion: 1.6.61, newChartVersion: 2.0.5, preUpgradeRevision: 89) — is omitted here for length.]
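The "invalid leading UTF-8 octet" failure points at a non-UTF-8 byte sequence somewhere in the merged values; the mangled `ncDbUser` entry in the dump above is a plausible suspect. As a stand-alone, plain-Python sketch (not a TrueNAS or Helm tool), here is one way to locate such bytes in a saved values file:

```python
# Locate every byte offset where UTF-8 decoding fails in raw data.
# Useful for finding which value in a pasted Helm values dump is mangled.

def find_invalid_utf8(data: bytes):
    """Return (absolute_offset, byte_value) pairs where UTF-8 decoding fails."""
    bad = []
    pos = 0
    while pos < len(data):
        try:
            data[pos:].decode("utf-8")
            break  # the remainder decodes cleanly
        except UnicodeDecodeError as err:
            bad.append((pos + err.start, data[pos + err.start]))
            pos += err.start + 1  # skip past the bad byte and keep scanning
    return bad

# A Latin-1 encoded 'é' is an invalid leading octet in UTF-8:
print(find_invalid_utf8("ncDbUser: caf\xe9".encode("latin-1")))  # -> [(13, 233)]
```

Running this over the exported values (e.g. the file Helm reports, read with `open(path, "rb").read()`) would show exactly which key holds the offending bytes.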
submitted by rweninger to truenas [link] [comments]


2024.05.14 06:38 ailm32442 Integrate GPT-4o into comfyui to achieve LLM visual functions!

GPT-4o has been released, and I'm joining the excitement by adding GPT-4o support to my open-source comfyui agent project, bringing vision capabilities into comfyui.
The project address is: heshengtao/comfyui_LLM_party: A set of block-based LLM agent node libraries designed for ComfyUI development.
In my open-source project, you can use these features:
  1. You can right-click in the comfyui interface, select `llm` from the context menu, and you will find the nodes for this project. [how to use nodes](how_to_use_nodes.md)
  2. Supports API integration or local large model integration, with a modular implementation for tool invocation. When entering the base_url, please use a URL that ends with `/v1/`. You can use [ollama](https://github.com/ollama/ollama) to manage your model: enter `http://localhost:11434/v1/` for the base_url, ollama for the api_key, and your model name for the model_name, such as `llama3`. If the call fails with a 503 error, you can try turning off the proxy server.
  3. Local knowledge base integration with RAG support.
  4. Ability to invoke code interpreters.
  5. Enables online queries, including Google search support.
  6. Implement conditional statements within ComfyUI to categorize user queries and provide targeted responses.
  7. Supports looping links for large models, allowing two large models to engage in debates.
  8. Attach any persona mask, customize prompt templates.
  9. Supports various tool invocations, including weather lookup, time lookup, knowledge base, code execution, web search, and single-page search.
  10. Use LLM as a tool node.
  11. Rapidly develop your own web applications using API + Streamlit. The picture below is an example of a drawing application.
  12. Added a dangerous omnipotent interpreter node that allows the large model to perform any task.
  13. It is recommended to use the `show_text` node under the `function` submenu of the right-click menu as the display output for the LLM node.
https://preview.redd.it/5qlvjmaiob0d1.png?width=2100&format=png&auto=webp&s=5c04d31f6684d24da7729ed6771835f76ace78e9
submitted by ailm32442 to comfyui [link] [comments]


2024.05.13 17:49 MaHcIn Can't get my SwiftUI view to update

Hi,
I have a SwiftUI view like this:
ScrollViewReader { proxy in
    ScrollView {
        VStack(spacing: 0) {
            ForEach(viewModel.timelineRows) { timelineRow in
                switch (timelineRow) {
                case .Data(let calculation):
                    let contentView = ContentView( // This is the view that won't update
                        type: type,
                        calculations: calculation,
                        showBalanceForToday: viewModel.showBalanceOnToday,
                        responseBalanceType: viewModel.responseBalanceType
                    )
                    let timelineRowView = TimelineRowView(
                        date: calculation.date!,
                        contentView: contentView,
                        showRightArrow: viewModel.canNavigateToHistory
                    )
                    .id(timelineRow.isToday ? "today" : timelineRow.id)
                    if viewModel.canNavigateToHistory {
                        NavigationLink {
                            HistoryViewController.SwiftUI(date: calculation.date!)
                        } label: {
                            timelineRowView
                        }
                    } else {
                        timelineRowView
                    }
                case .Separator(let date):
                    TimelineMonthSeparatorView(text: date.formattedText(format: "MMM. yyyy"))
                }
            }
        }
    }
    .onAppear {
        proxy.scrollTo("today", anchor: .center)
        TimelinesHostingViewController.ViewMediator.shared.onTodayButtonPressed = {
            withAnimation {
                proxy.scrollTo("today", anchor: .center)
            }
        }
    }
}
The `ContentView` is quite simple, it just has some logic that decides which struct to return depending on the parameters:
struct ContentView: View {
    var type: TimelineType
    var calculations: UserCalculations.DailyCalculation
    var showBalanceForToday: Bool
    var responseBalanceType: SystemSettings.ResponseBalanceType

    var body: some View {
        ZStack {
            let isCalculationTodayOrEarlier = calculations.date?.isBeforeDate(DateInRegion(region: .current) + 1.days, granularity: .day) == true
            if type == .Balances && isCalculationTodayOrEarlier {
                let shouldShowBalance = calculations.date?.isToday == false || showBalanceForToday
                BalanceView(showBalances: shouldShowBalance,
                            responseBalanceType: responseBalanceType,
                            runningValue: Double(calculations.calculationResultSummary!.runningBalanceValue!),
                            dailyValue: Double(calculations.calculationResultSummary!.dailyBalanceValue!))
            } else if type == .Schedule {
                ScheduleView(timePolicy: calculations.timePolicy, absence: calculations.absences?.first)
            }
            EmptyView()
        }
    }
}
The field that's changing at runtime is `showBalanceForToday` and it resides in `ViewModel` like this:
@Published var showBalanceOnToday = false

init() {
    ClockStatusManager.shared.$clockStatus.map { status in
        status?.status?.isClockedIn == true
    }
    .assign(to: \.showBalanceOnToday, on: self)
    .store(in: &cancelBag)
}
I double checked that the ClockStatus value gets propagated to `showBalanceOnToday`; however, the SwiftUI view that utilises this Published variable still doesn't get updated.
What could be the issue here? My only suspicion is that the views inside the `ForEach` loop won't get updated unless the `ForEach` parameter itself gets modified, but that wouldn't make sense from a coding perspective, since other independent state variables can make their way into the `ForEach` block.
Edit: Per request, adding the TimelineRowView code:
struct TimelineRowView<Content: View>: View {
    var date: DateInRegion
    var contentView: Content
    var showRightArrow: Bool

    var body: some View {
        ZStack {
            // Background
            date.isToday ? Color("TimelineRowHighlight") : Color("List")
            // Content
            HStack {
                // Day
                VStack {
                    Group {
                        Text(String(date.day))
                            .font(Font.system(size: 24))
                        Text(date.date.formattedText(format: "EEEE"))
                            .font(Font.system(size: 9))
                    }
                    .foregroundColor(date.weekday == 1 ? Color("Red") : Color("TextBlack"))
                }.frame(width: 64)
                // Content View
                Spacer()
                contentView
                Spacer()
                // Arrow
                if showRightArrow {
                    Image("arrow_right")
                        .renderingMode(.template)
                        .foregroundColor(Color("TextLight"))
                        .padding(.horizontal, 16)
                }
            }
            // Divider
            VStack {
                Spacer()
                Divider()
            }
        }
        .frame(height: 56)
    }
}
Edit 2: Adding ViewModel
class TimelineViewModel: ObservableObject {
    @Published var timelineRows: [TimelineRow] = []
    @Published var error: NetworkError?
    @Published var showBalanceOnToday = false

    private let user = LoginManager.shared.user
    private let client = StatisticsClient()
    private let systemSettings = SystemSettingsRepository.shared.systemSettings
    private var disposeBag = DisposeBag()
    private var cancelBag = Set<AnyCancellable>()

    var responseBalanceType: SystemSettings.ResponseBalanceType {
        return systemSettings.responseBalanceType
    }

    var canNavigateToHistory: Bool {
        return user?.canViewHistory == true
    }

    var isUserClockedIn: Bool {
        return ClockStatusManager().clockStatus?.status?.isClockedIn == true
    }

    init() {
        ClockStatusManager.shared.$clockStatus.map { status in
            status?.status?.isClockedIn == true
        }
        .assign(to: \.showBalanceOnToday, on: self)
        .store(in: &cancelBag)
    }

    func loadTimeline() {
        error = nil
        let from = (Date() - 1.months).dateAtStartOf(.month)
        let to = (Date() + 1.months).dateAtEndOf(.month)
        client.getLatestUserCalculations(fromDate: from, toDate: to)?
            .subscribe(on: SerialDispatchQueueScheduler(qos: .background))
            .observe(on: MainScheduler.instance)
            .subscribe { (response: ApiResponse) in
                if let error = response.error {
                    self.error = error
                } else if let calculations = response.responseObject {
                    self.timelineRows = self.calculateTimelineRows(calculations: calculations.dailyCalculations)
                }
            }.disposed(by: disposeBag)
    }

    private func calculateTimelineRows(calculations: [UserCalculations.DailyCalculation]?) -> [TimelineRow] {
        let sorted = calculations?.sorted(by: { $0.date!.isBeforeDate($1.date!, granularity: .minute) })
        var rows: [TimelineRow] = []
        for calculation in sorted ?? [] {
            if calculation.date?.day == 1 {
                rows.append(TimelineRow.Separator(date: calculation.date!.date))
            }
            rows.append(TimelineRow.Data(dailyCalculation: calculation))
        }
        return rows
    }
}
submitted by MaHcIn to iOSProgramming [link] [comments]


2024.05.13 15:24 twentyfifth38 Finally took some time to learn and configure Obsidian

Finally took some time to learn and configure Obsidian submitted by twentyfifth38 to engagearticle [link] [comments]


2024.05.13 14:41 b4nerj3e Vhosts in the same subfolder of /var/www/html not working in wordpress

Hi, I am trying to migrate my WordPress multisite to Kubernetes, and I am having problems importing the files.
On my current server I use Apache, and all domains in the vhost config point to the path /var/www/html/web.
However, in Kubernetes I can't get WordPress to use that path; it always uses /var/www/html, so it doesn't read my files in this folder.
I'm sure this should be easy to fix, but I can't find a way.
I attach my configuration for the nginx ingress and the WordPress app, because I am not clear whether I have to configure it in both places or only in one.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wordpress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "wp-prod-issuer"
spec:
  rules:
    - host: www.mysite.com
      http:
        paths:
          - path: "/"
            pathType: Prefix
            backend:
              service:
                name: wordpress
                port:
                  number: 80
    - host: myothersite.com
      http:
        paths:
          - path: "/"
            pathType: Prefix
            backend:
              service:
                name: wordpress
                port:
                  number: 80
  tls:
    - hosts:
        - www.mysite.com
        - myothersite.com
      secretName: wordpress-tls
---
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: web
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: web
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: wordpress
        tier: web
    spec:
      containers:
        - image: wordpress:php8.1
          name: wordpress
          workingDir: /var/www/html/web
          env:
            - name: WORDPRESS_DB_HOST
              value: mysql-wp:3306
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-user-password-8gh42ctd9d
                  key: passworduser
            - name: WORDPRESS_DB_USER
              valueFrom:
                secretKeyRef:
                  name: mysql-user-ft772h9b89
                  key: username
            - name: WORDPRESS_DB_NAME
              valueFrom:
                secretKeyRef:
                  name: mysql-database-86m8k7bm58
                  key: database
          lifecycle:
            postStart:
              exec:
                # command: ["/bin/bash", -c, "chown -R www-data:www-data /var/www/html; chmod -R 774 /var/www/html"]
                command:
                  - /bin/sh
                  - -c
                  - a2enmod actions allowmethods auth_digest authn_anon authn_socache authz_dbd authz_dbm authz_groupfile cache cache_disk data dbd echo ext_filter headers include info mime_magic mime slotmem_plain slotmem_shm socache_dbm socache_memcache socache_shmcb substitute suexec unique_id userdir vhost_alias dav dav_fs dav_lock lua mpm_prefork proxy lbmethod_bybusyness lbmethod_byrequests lbmethod_bytraffic lbmethod_heartbeat proxy_ajp proxy_balancer proxy_connect proxy_express proxy_fcgi proxy_fdpass proxy_ftp proxy_http proxy_scgi proxy_wstunnel ssl cgi
          ports:
            - containerPort: 80
              name: wordpress
          volumeMounts:
            - name: persistent-storage
              mountPath: /var/www/html
              subPath: web
            - name: config-volume-1
              mountPath: /etc/apache2/apache2.conf
              subPath: apache2.conf
      volumes:
        - name: persistent-storage
          persistentVolumeClaim:
            claimName: wordpress
        - name: config-volume-1
          configMap:
            name: apache2conf
Thank you very much in advance.
submitted by b4nerj3e to kubernetes [link] [comments]


2024.05.13 04:53 neoravekandi About F***ing time

About F***ing time submitted by neoravekandi to wotv_ffbe [link] [comments]



2024.05.12 06:41 3chut4 Sonoff TX Ultimate Touch Panel Issues

Hello Reddit.
I bought a bunch of Sonoff TX Ultimate switches a few months ago (T5-1C-120 and T5-2C-120). These are the US version (rectangular, not square).
I flashed them all with ESPHome and I'm finding the touch panels very unresponsive. I have to constantly tap the panel multiple times to register that I want to turn a light on or off. The integration with HA and Google works fine. I have reviewed my code multiple times trying to find any reason why this is happening, but I haven't been able to find anything.
Here's the code I used for my T5-2C-120 in the ensuite:
# Built using: # https://gist.github.com/wolph/42024a983e4dfb0bc1dcbe6882979d21 substitutions: name: 'ensuit-switch' brightness_on: 25% brightness_nightlight: 25% esphome: name: $name platform: ESP32 board: esp32dev # on_boot: # priority: -100.0 # then: # - button.press: light_relays includes: - touch_panel.hpp - touch_panel.cpp # Enable logging logger: # level: DEBUG # Enable Home Assistant API api: encryption: key: "################################" ota: password: "#############################" wifi: ssid: !secret wifi_ssid password: !secret wifi_password # Enable fallback hotspot (captive portal) in case wifi connection fails ap: ssid: "Ensuit-Switch Fallback Hotspot" password: "##################" ## Enable Bluetooth Proxy #esp32_ble_tracker: # scan_parameters: # interval: 1100ms # window: 1100ms # active: true #bluetooth_proxy: # active: true # Home Assistant Light State #text_sensor: # - platform: homeassistant # name: "Hallway Light" # id: hallway_light # entity_id: light.hallway_light # on_value: # button.press: light_relays # # - platform: homeassistant # id: toilet_light # entity_id: light.toilet_light # on_value: # button.press: light_relays # # - platform: homeassistant # id: night_mode # entity_id: input_boolean.gone_to_bed uart: id: uart_bus tx_pin: 19 rx_pin: 22 baud_rate: 115200 #button: # - platform: template # name: light relays # id: light_relays # on_press: # - if: # condition: # switch.is_on: relay1_front_room # then: # - light.turn_on: # id: light.switch_leds # brightness: ${brightness_nightlight} # red: 100% # green: 4% # blue: 18% # transition_length: 500ms # else: # - light.turn_on: # id: rgb_light # brightness: ${brightness_nightlight} # red: 100% # green: 86% # blue: 35% # transition_length: 500s # - if: # condition: # switch.is_off: relay1_front_room # then: # - light.turn_on: # id: rgb_light # brightness: ${brightness_on} # red: 100% # green: 86% # blue: 35% # transition_length: 500ms # else: # - light.turn_on: # id: rgb_light # 
brightness: ${brightness_nightlight} # red: 100% # green: 86% # blue: 35% # transition_length: 500s binary_sensor: - platform: custom lambda: - auto touch_panel = new touch_panel::TouchPanel(id(uart_bus)); App.register_component(touch_panel); return { touch_panel->left, touch_panel->right, touch_panel->dragged_up, touch_panel->dragged_down, touch_panel->two_finger, }; # touch_panel->middle, binary_sensors: - id: button_left name: "Left Button" on_press: - switch.toggle: relay1_ensuit_light # - switch.turn_on: haptics # - delay: 500ms # - button.press: light_relays # - id: button_middle # name: "Middle Button" # on_press: # - switch.toggle: relay1_front_room # - switch.turn_on: haptics # - delay: 500ms # - button.press: light_relays - id: button_right name: "Right Button" on_press: - switch.toggle: relay2_ensuit_extractor # - switch.turn_on: haptics # - delay: 500ms # - button.press: light_relays - id: button_two_finger name: "Two Fingers" - id: button_dragged_up name: "Dragged Up" - id: button_dragged_down name: "Dragged Down" # Switch Relays switch: - platform: gpio name: "relay left ensuit light" pin: GPIO18 id: relay1_ensuit_light restore_mode: ALWAYS_OFF # on_turn_on: # button.press: light_relays # on_turn_off: # button.press: light_relays # - platform: gpio # name: "relay middle living room" # pin: GPIO17 # id: relay2_living_room # restore_mode: ALWAYS_OFF # on_turn_on: # button.press: light_relays # on_turn_off: # button.press: light_relays - platform: gpio name: "relay right ensuit extractor" pin: GPIO17 id: relay2_ensuit_extractor restore_mode: ALWAYS_OFF # on_turn_on: # button.press: light_relays # on_turn_off: # button.press: light_relays #- platform: gpio # name: "relay right living room" # pin: GPIO23 # id: relay4_living_room # on_turn_on: # button.press: light_relays # on_turn_off: # button.press: light_relays # - platform: gpio # name: "sound amplifier power" # pin: GPIO26 # id: pa_sw - platform: gpio name: "touch panel power" pin: number: GPIO5 
inverted: true id: ca51_pow restore_mode: RESTORE_DEFAULT_ON # - platform: gpio # pin: GPIO21 # name: "Haptics" # id: "haptics" # restore_mode: ALWAYS_OFF # on_turn_on: # - delay: 400ms # - switch.turn_off: haptics # Light light: - platform: neopixelbus type: GRB variant: WS2812 pin: GPIO13 num_leds: 1 name: "NeoPixel 13" internal: true - platform: neopixelbus type: GRB variant: WS2812 # pin: GPIO20 pin: GPIO33 num_leds: 32 name: "Ambience Nightlight" id: rgb_light effects: - addressable_rainbow: name: 'rainbow fast' speed: 50 - pulse: - pulse: name: "Fast Pulse" transition_length: 0.5s update_interval: 0.5s - pulse: name: "Slow Pulse" # transition_length: 1s # defaults to 1s update_interval: 2s - addressable_scan: - addressable_scan: name: Scan Effect With Custom Values move_interval: 100ms scan_width: 3 - addressable_twinkle: - addressable_twinkle: name: Twinkle Effect With Custom Values twinkle_probability: 5% progress_interval: 4ms - addressable_random_twinkle: - addressable_random_twinkle: name: Random Twinkle Effect With Custom Values twinkle_probability: 20% progress_interval: 32ms - addressable_fireworks: - addressable_fireworks: name: Fireworks Effect With Custom Values update_interval: 32ms spark_probability: 10% use_random_color: false fade_out_rate: 120 - addressable_flicker: - addressable_flicker: name: Flicker Effect With Custom Values update_interval: 16ms intensity: 5% - platform: partition id: light_top_left name: "light top left ensuit" segments: - id: rgb_light from: 0 to: 3 - platform: partition id: light_top_left_extra_LED name: "light top left ensuit extra LED" segments: - id: rgb_light from: 31 to: 31 - platform: partition id: light_top_right name: "light top right ensuit" segments: - id: rgb_light from: 4 to: 8 - platform: partition id: light_right_top name: "light right top ensuit" segments: - id: rgb_light from: 9 to: 11 - platform: partition id: light_right_bottom name: "light right bottom ensuit" segments: - id: rgb_light from: 12 to: 14 
- platform: partition id: light_bottom_left name: "light bottom left ensuit" segments: - id: rgb_light from: 20 to: 24 - platform: partition id: light_bottom_right name: "light bottom right ensuit" segments: - id: rgb_light from: 15 to: 19 - platform: partition id: light_left_top name: "light left top ensuit" segments: - id: rgb_light from: 28 to: 30 - platform: partition id: light_left_bottom name: "light left bottom ensuit" segments: - id: rgb_light from: 25 to: 27 # I2S audio component i2s_audio: i2s_bclk_pin: GPIO2 # BCK i2s_lrclk_pin: GPIO4 # WS # Player component for I2S media_player: - platform: i2s_audio name: Speaker dac_type: external mode: mono i2s_dout_pin: GPIO15 i2s_comm_fmt: lsb mute_pin: number: GPIO26 inverted: true 
Is there any YAML guru around here that could let me know what may be wrong with my code to cause the touch panel to behave so poorly?
THANKS in advance!
submitted by 3chut4 to Esphome [link] [comments]



2024.05.11 20:18 violetarcher New player with some questions about the Venator I & II

I've been enjoying Legion for a couple years and I'm super excited to finally make the leap over into Armada. I've purchased the starter fleets for GAR and Separatists, one squadron each, as well as the Recusant and Pelta ships. But I am having difficulties finding any Venators on the market, online or locally or used. To be honest, I've only been looking for a couple of weeks now, but it seems pretty grim.
Does anyone know if they plan on printing another run of Venators? Is printing finished for Armada, entirely?
If so, does anyone know the best way to proxy the Venator? I've found a few resin prints on eBay etc., but I am more concerned about getting high-quality cards, a base template, a ship stand, tokens, etc.
I found an Etsy seller that makes pretty legitimate-looking laminated cards for Rapid Reinforcements I & II; is there anything like that for the Venator cards?
submitted by violetarcher to StarWarsArmada [link] [comments]



2024.05.10 16:51 Poke_Hybrids Any thoughts on this Fallout Proxy template I'm working on? I'm planning on making a full deck. Are there any glaring issues? This is my first post on this sub, so Hi :)

Any thoughts on this Fallout Proxy template I'm working on? I'm planning on making a full deck. Are there any glaring issues? This is my first post on this sub, so Hi :) submitted by Poke_Hybrids to magicTCG [link] [comments]


2024.05.10 16:44 Poke_Hybrids Any thoughts on this Fallout Proxy template I'm working on? I'm planning on making a full deck. Are there any glaring issues?

Any thoughts on this Fallout Proxy template I'm working on? I'm planning on making a full deck. Are there any glaring issues? submitted by Poke_Hybrids to magicproxies [link] [comments]


2024.05.09 07:34 Rigamortus2005 ASP.NET 8 + ANGULAR Template with Docker breaks the SPA proxy

When an Angular + ASP.NET 8 project is created with Docker selected at the creation screen, all services are able to build and run; however, the SPA proxy does not work because Docker randomly reassigns the HTTPS port defined in launchSettings.json. Is there a way around this? Without Docker the server starts at, say, localhost:7122, and Angular proxies requests to localhost:7122 automatically. But with Docker the server starts at localhost:34343 (a random port), and Angular doesn't know where to proxy to. How can I fix this?
submitted by Rigamortus2005 to dotnet [link] [comments]


2024.05.08 12:16 GreenMonster82 Tree proxies

Does anyone have any good suggestions for Wyldwood proxies or 3D-print templates? I find it insane to have to pay over $100 for trees.
submitted by GreenMonster82 to sylvaneth [link] [comments]


2024.05.08 10:14 tempmailgenerator Implementing Email Functionality in Web Forms with Nodemailer

Implementing Email Functionality in Web Forms with Nodemailer

https://preview.redd.it/f6kktpbmv5zc1.png?width=1024&format=png&auto=webp&s=16b3f8692975bb90bebb1d8b709cc95bf306f97c

Streamlining Communication: Leveraging Nodemailer for User-Submitted Forms

Email has become an indispensable part of our daily communication, especially in the digital realm where web forms serve as the primary interface for user interactions. Integrating email functionalities into these forms not only enhances user experience but also streamlines communication channels for businesses and developers alike. Nodemailer, a Node.js module, emerges as a powerful tool in this context, offering a straightforward and efficient way to send emails directly from a web application.
Understanding how to implement Nodemailer effectively can transform the way we handle form submissions, feedback, and notifications. Whether it's for a contact form, registration process, or any other user interaction, incorporating email responses adds a layer of professionalism and engagement. This guide aims to demystify the process, making it accessible for developers of all skill levels to integrate and automate email communication seamlessly within their projects.
| Command | Description |
| --- | --- |
| `require('nodemailer')` | Include the Nodemailer module |
| `createTransport()` | Create a reusable transporter object using the default SMTP transport |
| `sendMail()` | Send an email using the transporter object |

Enhancing Web Forms with Email Integration

Email integration through web forms is a critical feature for modern web applications, offering a direct line of communication from users to the application administrators or support team. By leveraging Nodemailer, developers can easily automate email responses to user inquiries, submissions, and feedback, enhancing the overall user experience. This process not only streamlines communication but also provides a tangible connection between the user and the web service. For instance, when a user submits a contact form, an automated email confirmation can be sent to both the user and the administrator, acknowledging the receipt of the query and providing a timeline for a response.
Moreover, Nodemailer's flexibility in configuring SMTP servers allows for the customization of email content, including HTML templates, attachments, and headers, enabling a personalized communication strategy. This can significantly increase engagement and satisfaction, as users receive timely and relevant responses. Additionally, Nodemailer supports various security and authentication options, such as OAuth2, ensuring that email transmissions are secure and reliable. This aspect is particularly important for businesses that handle sensitive user information and wish to maintain high standards of privacy and security. Implementing Nodemailer in web form processing not only optimizes operational efficiency but also reinforces trust and reliability in the digital ecosystem.
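A minimal sketch of that personalization step follows; the `renderTemplate` helper, the `{{name}}` placeholder syntax, and the addresses are illustrative assumptions, not part of Nodemailer's API — the rendered string is simply what you would pass as the `html` field of the message:

```javascript
// Illustrative placeholder rendering for personalized HTML emails.
// renderTemplate and the {{key}} syntax are assumptions for this sketch,
// not something Nodemailer provides itself.
function renderTemplate(template, values) {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in values ? String(values[key]) : match
  );
}

const confirmationTemplate =
  "<p>Hi {{name}},</p><p>We received your message and will reply within {{sla}}.</p>";

const html = renderTemplate(confirmationTemplate, { name: "Ada", sla: "2 business days" });

// Assumed addresses; the rendered HTML goes straight into the message body.
const mailOptions = {
  from: '"Support" <support@example.com>',
  to: "ada@example.com",
  subject: "We got your message",
  html: html
};

console.log(mailOptions.html);
```

Because the rendered HTML is just a string, the same helper works equally well for subjects or plain-text bodies.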

Setting Up Nodemailer

Node.js code snippet
const nodemailer = require('nodemailer');

let transporter = nodemailer.createTransport({
    host: "smtp.example.com",
    port: 587,
    secure: false, // true for 465, false for other ports
    auth: {
        user: "your_email@example.com",
        pass: "your_password"
    }
});

Sending an Email

Using Node.js
let mailOptions = {
    from: '"Sender Name" <sender@example.com>',
    to: "receiver@example.com",
    subject: "Hello ✔",
    text: "Hello world?",
    html: "<b>Hello world?</b>"
};

transporter.sendMail(mailOptions, (error, info) => {
    if (error) {
        return console.log(error);
    }
    console.log('Message sent: %s', info.messageId);
});

Mastering Email Delivery with Nodemailer

Integrating email functionalities into web applications using Nodemailer not only enhances the interaction between users and the system but also plays a crucial role in notification systems, marketing campaigns, and automated responses. The ability to programmatically send emails from within an application adds a layer of dynamism and personalization that can significantly impact user engagement and satisfaction. For example, ecommerce platforms can use Nodemailer to send order confirmations, shipping updates, and personalized marketing emails, thereby keeping the customer informed and engaged throughout their purchasing journey.
The technical advantages of Nodemailer extend beyond simple email sending capabilities. It supports multiple transport options, including SMTP, Sendmail, and even Amazon SES, providing flexibility in how emails are dispatched. This versatility ensures that developers can choose the most efficient and cost-effective method for their specific needs. Furthermore, the module's support for HTML emails and attachments enables the creation of visually appealing and informative messages, which can enhance the communication strategy of any business or application. With proper implementation, Nodemailer can become a powerful tool in the arsenal of modern web development, facilitating improved communication channels and contributing to the overall success of online platforms.

Email Integration FAQs with Nodemailer

  1. Question: What is Nodemailer?
     Answer: Nodemailer is a Node.js library that makes it easy to send emails from a server.
  2. Question: Can Nodemailer send HTML emails?
     Answer: Yes, Nodemailer can send emails in HTML format, allowing for rich text content and embedded images.
  3. Question: Does Nodemailer support attachments?
     Answer: Yes, it supports sending files as attachments in emails.
  4. Question: Can I use Nodemailer with Gmail?
     Answer: Yes, Nodemailer can be configured to send emails using Gmail's SMTP server.
  5. Question: Is Nodemailer secure?
     Answer: Yes, it supports various security mechanisms, including SSL/TLS for encrypted connections and OAuth2 for authentication.
  6. Question: How do I handle errors in Nodemailer?
     Answer: Errors can be handled using callbacks or promises to catch and respond to any issues during the email sending process.
  7. Question: Can Nodemailer send emails to multiple recipients?
     Answer: Yes, you can send emails to multiple recipients by specifying them in the 'to', 'cc', or 'bcc' fields.
  8. Question: How do I customize email content with Nodemailer?
     Answer: Email content can be customized by using HTML for the body and setting custom headers if needed.
  9. Question: Does Nodemailer support sending emails through proxies?
     Answer: While Nodemailer itself may not directly support proxies, you can use modules like 'proxy-agent' to integrate proxy support.
  10. Question: Can I use Nodemailer in frontend JavaScript?
      Answer: No, Nodemailer is designed to run on a Node.js server. It cannot be used directly in frontend code.
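Several of the answers above can be combined in one message. A minimal sketch of such a mailOptions object follows: multiple comma-separated recipients, HTML content, and a file attachment. All addresses and the file path are placeholders, and the actual send call is only referenced in a comment.

```javascript
// Sketch of a mailOptions object exercising multiple recipients,
// HTML content, and an attachment (all values are placeholders).
const reportMail = {
  from: '"Sender Name" <sender@example.com>',
  to: "first@example.com, second@example.com", // comma-separated recipients
  cc: "manager@example.com",
  subject: "Monthly report",
  html: "<p>Please find the report attached.</p>",
  attachments: [
    { filename: "report.pdf", path: "/tmp/report.pdf" } // placeholder path
  ]
};
// transporter.sendMail(reportMail, callback) would dispatch this message.
```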

Wrapping Up Email Integration with Nodemailer

As we've explored, Nodemailer stands out as a robust solution for integrating email functionalities into web applications, offering developers a powerful yet straightforward tool to enhance communication and interaction with users. Its versatility in handling different SMTP transports, support for HTML emails and attachments, and comprehensive security features, including SSL/TLS encryption and OAuth2 authentication, make it an ideal choice for projects of any scale. Whether for transactional emails, automated responses, or marketing campaigns, Nodemailer enables a level of personalization and efficiency that significantly contributes to the overall user experience. Embracing Nodemailer within web development projects not only simplifies the email sending process but also opens up new possibilities for engaging with users in a meaningful way, ensuring messages are delivered securely and effectively. With its extensive documentation and active community support, getting started with Nodemailer is accessible for developers at all levels, promising an enhancement in the way we think about and implement email communication within web applications.
https://www.tempmail.us.com/en/nodemaileimplementing-email-functionality-in-web-forms-with-nodemailer
https://www.tempmail.us.com/

submitted by tempmailgenerator to MailDevNetwork [link] [comments]


2024.05.08 04:49 Smyris FAQ - GUIDES - REQUESTS READ BEFORE YOU POST

DO NOT POST CARDS UNTIL YOU UNDERSTAND WHAT A BLEED EDGE IS. SEE THE FAQ SECTION FOR INFO
IF YOU HAVE AN IDEA FOR A CARD AND WANT TO MAKE A REQUEST, PLEASE DO SO IN THE COMMENT SECTION OF THIS POST.

WELCOME TO MPCPROXIES!

We are a community dedicated to the creation and sharing of playtest cards (often called proxies) for Magic the Gathering. Have a look around and be sure to read the FAQ. If you just want to look at cards consider checking out MPCfill.com as it is a database containing most of the cards you'll find here and much more!

USEFUL LINKS

Originally created by ChilliAxe and now rebuilt and remastered by u/mrteferi, mpcfill is a website for generating orders for use with card printing sites (we suggest makeplayingcards.com). Simply enter a list of cards and then select your art from a database of community works.
While it is in need of minor updates, the core info is still excellent for those wanting to know how to make and print proxies. The info on **bleed edges** is particularly useful.
Love making cards in photoshop but hate the grindy repetition? Proxyshop will automate the boring parts so you can focus on the card. It is highly versatile and contains nearly every template there is (if you just want the templates they are available as a separate download)
  • Cardconjurer.com
A simple web-based tool to create cards. Many of the frames found there were made right here on this sub!
UPDATE: The tool has undergone an overhaul after a bit of a kerfuffle with wotc. So you will need some templates to replicate the old functionality. There are not a lot available at the moment so for now your best bet is to install this offline version of the tool. Below is a quick install guide courtesy of Investigamer from the Discord:
  1. Clone or download (Code > Download Zip) the Card Conjurer repo
  2. Extract the zip somewhere on your PC.
  3. In the new cardconjurer directory, run launcher.exe (or launcher-macos, launcher-linux for other operating systems). Card conjurer should open in your browser and you're good to go 👍
The site will be served at localhost:8080 until you hit CTRL+C in the command window that popped up. ALSO: If you've ever visited Card Conjurer before (on the live site or locally) you may need to hit CTRL + F5 to reload the cache for the site to work correctly.
The discord chats associated with this sub and our sister sub magicproxies

FAQ

What is a playtest card?
First and foremost, a playtest card is a card that is NOT designed for use in an official capacity. It is usually made to test something, such as a new mechanic or card type. In our community, we test new styles, artwork, and frames with existing cards. For example, we might make a playtest card exploring what a card from the alpha set would look like in a Kaldheim frame. Playtest cards are often confused with proxy cards; though the names have become more or less interchangeable, it is important to note that actual proxy cards have an official use (to temporarily replace a damaged card during a competitive match) while playtest cards do not.
What the heck is a bleed edge?
MPC requires that images you send them for printing include an extra 1/8th of an inch on each edge. This accounts for the print bleed edge, and it’ll be cut off as part of the manufacturing process. More info on how to apply a bleed edge can be found in the wiki.
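For a concrete sense of scale, the bleed math works out as follows. This is a sketch assuming the standard 2.5 x 3.5 inch Magic card size and a commonly recommended 800 DPI; check MPC's own specs before ordering.

```javascript
// Bleed-edge arithmetic: MPC wants an extra 1/8 inch on EACH edge, so each
// dimension grows by 1/4 inch total. Card size and DPI are assumptions.
const CARD_W_IN = 2.5;  // assumed card width, inches
const CARD_H_IN = 3.5;  // assumed card height, inches
const BLEED_IN = 1 / 8; // per-edge bleed from the FAQ above
const DPI = 800;

const widthPx = Math.round((CARD_W_IN + 2 * BLEED_IN) * DPI);
const heightPx = Math.round((CARD_H_IN + 2 * BLEED_IN) * DPI);
console.log(widthPx, heightPx); // 2200 3000
```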
Can I use playtest cards when I play in MTG competitions?
No. It could potentially get you banned from future events. Some local stores may have different rules, however, that is something to discuss with them.
Can I use playtest cards when I play MTG with friends?
Yes! It is comparable to printing cards on paper and sleeving them over top of real cards. As with all things you should discuss this with your group first.
Do I need to buy playtest cards?
No. The WOTC fan content policy and our own rules are very clear on this. The cards we make here must, by their nature, be available for free. Anyone trying to sell you playtest or proxy cards, especially if they try to point you towards a proxy store, is trying to scam you.
It should be noted that some people make their own completely custom playtest cards and host them behind a paywall. This is different from a proxy store which often just lifts work from people on subs like this. Work from these individuals can't be found here unless it's been made available for free.
How do I print playtest cards?
You can either use a printing service or print at home with the help of some basic supplies. Most of the guides and tools here are geared towards the printing service makeplayingcards.com.
There are dotted red lines on my card preview in MPC. Is this normal?
Yes! The red dotted line is completely normal. It only indicates areas that might get cut off in the event of a catastrophic printing error (you'd need an earthquake to cut that much off).
Here are some images showing what your MPC card preview SHOULD look like: Example 1, Example 2, Example 3.
I've followed the wiki, but which card stock should I choose?
There are more than a few options to choose from. Fortunately, u/VorstTank has your back with their excellent Card Stock Review. Don't want to read more? jump to the tl;dr. Don't even want to do that?.. but... fine... S33 or S27. There. Happy? Good.
Is there an easy way to put together a printing set?
Yes! MPCfill.com is a tool for generating orders. Simply enter a card list and mpcfill will show you a list of available playtest options for said cards; options found on mpcfill are indexed from community members' google drives. In this way, mpcfill also acts as the most comprehensive and up-to-date database of community cards. (Despite this, not everyone has their cards available on mpcfill so it is also recommended you browse the subreddit and discords.)
Can I make a playtest card?
Absolutely! We are always happy to see new creators join the fold. Thanks to the efforts of various community members the tools available to us have expanded greatly and it is now possible to create fantastic-looking cards in seconds! Check out the guides and templates sections for more information. [As the guides section is under construction I will start by pointing you towards the templates section if you know photoshop and towards cardconjurer.com if you do not.]

GUIDES

CONTRIBUTING TO MPCfill

So you made some cards and you'd like to get them hosted on MPCfill? Well here's what you gotta know:
  1. Your drive must meet the following standards:
    • All cards must be named as follows: "Card Name (additional info)"
      Example: Hornet Queen (synth version)
    • All proxy images must be MPC formatted (have the 1/8th bleed edge)
    • None of your proxy images can contain a copyright line
    • Have at least 20 proxy images in the google drive you wish to share
    • Have no proxy images over 30mb (the site can't index these)
    • Have proxy images with an average DPI above 600, the recommended is 800+
    • Token images ONLY in a folder called "Tokens"
    • Card back images ONLY in a folder called "Cardbacks"
    • Any folders on your drive that DON'T contain proxies must be prefixed with "!", for example: "!Templates"
    • Don't rip proxy images from other creators and put them on your drive
  2. With that done you must now join the Discord
  3. Navigate to the 'contribute' channel in the MPCFILL.COM section
  4. Submit your drive in the following format:
    • Your Creator Name
    • A description of the proxies you create
    • A link to the google drive folder containing your proxies
    • A statement as to whether you want your drive accessible through the contributions page of the website (either "Make my drive public" or "Keep my drive private")
    Example:
    Creator Name: Smyris
    Description: Mix of styles with a focus on using new art or remastered frames
    Google Drive: https://drive.google.com/drive2/folders/1-zKMu1EvOMWiu9o3BmiEie7kS-2X1Bjn
    Make my drive public
  5. Once you've submitted your drive, simply wait and make any adjustments asked of you.
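The naming rule in step 1 can be sanity-checked with a quick script before you submit. This is a hedged sketch: the exact rule MPCfill enforces may be looser or stricter than this regex.

```javascript
// Checks the "Card Name (additional info)" convention from the standards
// above. This regex is an illustration, not MPCfill's actual validator.
const nameRule = /^.+ \(.+\)$/;
console.log(nameRule.test("Hornet Queen (synth version)")); // true
console.log(nameRule.test("Hornet Queen"));                 // false
```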

TEMPLATES

TO USE THESE TEMPLATES YOU MUST:
  1. Download the fonts
  2. Own a program capable of reading and editing Photoshop files
  3. Have a basic to intermediate understanding of photo editing programs
Resources
Silvan's Templates
VittorioMasia's Templates
WarpDandy's Templates and other resources
The Fonts
(More Fonts for some non-core templates are available here and here)
Core Templates (Updated Feb 2022)
[Note: All core templates make use of Layer Comps to simplify usage]
Template | Notes | Creator
Standard Modern Card | Can also be used to create Nyx cards. | u/SilvanMTG
Extended and Full Art Cards | Can also be used to create Nyx cards. | u/SilvanMTG
Planeswalker | n/a | u/SilvanMTG
Set specific templates (Updated Feb 2022)
Template | Notes | Creator
Innistrad Fang | n/a | u/MichaYggdrasil
EN Mystical Archive | This template has been improved to allow for creatures. If you wish to change the size of the textbox it is recommended that you have at least an intermediate understanding of photoshop. | u/MaxieManDanceParty
JP Mystical Archive | This template has been improved to allow for creatures. | u/VittorioMasia
Amonkhet Extended Invocation | This template alters the visual of the original Invocations to be more 'readable'. It is unrefined and not recommended for those without an intermediate understanding of photoshop. | u/Smyris
Kaladesh (Re)Inventions | An improvement on the original Kaladesh Inventions, this template allows the gold filigree to reach the edge of the card. | u/VittorioMasia
ZNR Expeditions | n/a | u/VittorioMasia
Unique Community Frames (Updated Feb 2022)
Template | Notes | Creator
Full Art Shrine Frame | Full art frame with a Shrine Aesthetic. Includes unique crown. | u/VittorioMasia, u/bazuki
Full Art Ninja + Brawl Crown | Full art frame with some unique crowns. | u/VittorioMasia, u/bazuki
Neon Frame | Basically the Full Art frame but given a nifty glow up. Ha! Puns. | u/SilvanMTG
Eldrazi Dust Frame | A custom frame with an Eldrazi Aesthetic. The bone-like structures are based on the Eldrazi Titans. | u/kasuMTG
True Dnd Frame | A frame made in response to the DnD set. It is designed around the DnD source books. | u/FeuerAmeise + u/MCMan6482
submitted by Smyris to mpcproxies [link] [comments]


2024.05.07 16:05 NevinEdwin How to create and Connect AWS Opensearch inside a vpc using lambda

Hi guys, I'm stuck on a requirement: I have to create an Amazon OpenSearch domain in a private manner inside a VPC and run queries against it using Lambda.
I tried creating an OpenSearch domain inside a VPC, along with a Lambda in the same VPC, and tried to access the search endpoint from the Lambda, but I always get a timeout error.
Please help me find a solution. I am using CloudFormation for this.

The template is:

AWSTemplateFormatVersion: "2010-09-09"
Transform: "AWS::Serverless-2016-10-31"
Description: "Hyphen CRM SAM Template For Elastic Search Demo"
Parameters:
StageName:
Type: String
Default: "dev"
AllowedValues:
Mappings:
ElasticSearchOptions:
dev:
InstanceType: t2.small.elasticsearch
InstanceCount: 2
MasterEnables: "false"
DedicatedMasterType: c5.large.elasticsearch
DedicatedMasterCount: 0
EBSVolumeSize: 10
EBSVolumeType: gp2
AvailabilityZoneCount: 2
ZoneAwarenessEnabled: "true"
Globals:
Function:
Runtime: nodejs20.x
Timeout: 900
Layers:
Resources:
CRMTrailApiGateway:
Type: AWS::Serverless::Api
Properties:
Name: crm-trail-api-end-point
StageName: !Ref StageName
Cors:
AllowMethods: '''GET,POST'''
AllowHeaders: '''Content-Type,X-Amz-Date,Authorization,X-Api-KeyContent-Type,X-Api-Key,X-Amz-Security-Token,X-Amz-User-Agent'''
AllowOrigin: '''*'''
CRMTrailSampleLayerEs:
Type: AWS::Serverless::LayerVersion
Properties:
ContentUri: ./lambdaLayerStack/nodejs.zip
LayerName: "es-sample-lambda-layer"
RetentionPolicy: Delete
CompatibleRuntimes:
Type: AWS::DynamoDB::Table
Properties:
TableName: testcrmEntitiesTable
AttributeDefinitions:
AttributeType: S
AttributeType: S
KeySchema:
KeyType: HASH
KeyType: RANGE
PointInTimeRecoverySpecification:
PointInTimeRecoveryEnabled: true
BillingMode: PAY_PER_REQUEST
StreamSpecification:
StreamViewType: "NEW_AND_OLD_IMAGES"
CRMTrailVPC:
Type: AWS::EC2::VPC
Properties:
CidrBlock: 10.0.0.0/24
EnableDnsSupport: true
EnableDnsHostnames: true
InstanceTenancy: default
CRMTrailSubnet1:
Type: AWS::EC2::Subnet
Properties:
VpcId: !Ref CRMTrailVPC
CidrBlock: 10.0.0.0/25
AvailabilityZone: !Select
Ref: "AWS::Region"
MapPublicIpOnLaunch: false
CRMTrailSubnet2:
Type: AWS::EC2::Subnet
Properties:
VpcId: !Ref CRMTrailVPC
CidrBlock: 10.0.0.128/25
AvailabilityZone: !Select
Ref: "AWS::Region"
MapPublicIpOnLaunch: false
CRMTrailIG:
Type: AWS::EC2::InternetGateway
Properties:
Tags:
Value: CRMTRAILIG
Value: CRMTRAIL
CRMTrailGatewayAttachment:
Type: AWS::EC2::VPCGatewayAttachment
Properties:
VpcId: !Ref CRMTrailVPC
InternetGatewayId: !Ref CRMTrailIG
CRMTrailVPCRouteTable:
Type: AWS::EC2::RouteTable
Properties:
VpcId: !Ref CRMTrailVPC
CRMTrailVPCRoute:
Type: AWS::EC2::Route
DependsOn: CRMTrailGatewayAttachment
Properties:
RouteTableId: !Ref CRMTrailVPCRouteTable
DestinationCidrBlock: 0.0.0.0/0
GatewayId: !Ref CRMTrailIG
CRMTrailVPCSubnetAssociation:
Type: AWS::EC2::SubnetRouteTableAssociation
Properties:
SubnetId: !Ref CRMTrailSubnet1
RouteTableId: !Ref CRMTrailVPCRouteTable
CRMTrailVPCSubnet2Association:
Type: AWS::EC2::SubnetRouteTableAssociation
Properties:
SubnetId: !Ref CRMTrailSubnet2
RouteTableId: !Ref CRMTrailVPCRouteTable
CRMTrailProxyLambdaSecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
GroupDescription: Allows lambda for axcessing from outside vpc
VpcId: !Ref CRMTrailVPC
SecurityGroupEgress:
FromPort: 433
ToPort: 433
CidrIp: 0.0.0.0/0
SecurityGroupIngress:
FromPort: 433
ToPort: 433
CidrIp: 0.0.0.0/0
CRMTrailProxyLambdaRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: 2012-10-17
Statement:
Principal:
Service: lambda.amazonaws.com
Action: sts:AssumeRole
Path: /
Policies:
PolicyDocument:
Version: 2012-10-17
Statement:
Action:
Resource: "*"
Action:
Resource:
Action:
Resource: !GetAtt CRMTrailTestEntitiesTable.StreamArn
Action:
Resource: "arn:aws:logs:*:*:*"
CRMTrailDbStreamLambda:
Type: AWS::Serverless::Function
Properties:
FunctionName: crm-trail-dbStream-function
Handler: dynamoStream.main
CodeUri: dynamoStream
Role: !GetAtt CRMTrailProxyLambdaRole.Arn
VpcConfig:
SecurityGroupIds:
SubnetIds:
Events:
DBStream:
Type: DynamoDB
Properties:
Stream: !GetAtt CRMTrailTestEntitiesTable.StreamArn
StartingPosition: LATEST
BatchSize: 1
DBWarmUpRule:
Type: Schedule
Properties:
Schedule: rate(5 minutes)
CRMTrailProxyLambda:
Type: AWS::Serverless::Function
Properties:
FunctionName: crm-trail-proxylambda-function
Handler: elastic.main
CodeUri: elastic
Role: !GetAtt CRMTrailProxyLambdaRole.Arn
Environment:
Variables:
ES_ENDPOINT: !GetAtt CRMTrailElasticSearchDomainCRMSource.DomainEndpoint
VpcConfig:
SecurityGroupIds:
SubnetIds:
Events:
ApiTrigger:
Type: Api
Properties:
Method: POST
Path: /elastic
RestApiId: !Ref CRMTrailApiGateway
ProxyWarmUpRule:
Type: Schedule
Properties:
Schedule: rate(5 minutes)
CRMTrailElasticRole:
Type: "AWS::IAM::Role"
Properties:
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
Principal:
Service: "lambda.amazonaws.com"
Action: "sts:AssumeRole"
Path: "/"
Policies:
PolicyDocument:
Version: "2012-10-17"
Statement:
Action:
Resource: "arn:aws:logs:*:*:*"
Action:
Resource: !GetAtt CRMTrailTestEntitiesTable.StreamArn
Action:
Resource:
Type: AWS::IAM::ServiceLinkedRole
Properties:
AWSServiceName: es.amazonaws.com
Description: Service-linked role for Amazon OpenSearch Service
CRMTrailElasticSecurityGroups:
Type: AWS::EC2::SecurityGroup
Properties:
GroupDescription: Allow elastic Access from proxy lambda
VpcId: !Ref CRMTrailVPC
SecurityGroupIngress:
FromPort: 443
ToPort: 443
SourceSecurityGroupId: !Ref CRMTrailProxyLambdaSecurityGroup
FromPort: 443
ToPort: 443
CidrIp: 49.37.232.111/32
CRMTrailElasticSearchDomainCRMSource:
Type: AWS::Elasticsearch::Domain
Properties:
DomainName: crm-demo-source
ElasticsearchClusterConfig:
InstanceType: !FindInMap [ElasticSearchOptions, !Ref StageName, InstanceType]
InstanceCount: !FindInMap [ElasticSearchOptions, !Ref StageName, InstanceCount]
DedicatedMasterEnabled: "false"
DedicatedMasterType: !Ref AWS::NoValue
DedicatedMasterCount: !Ref AWS::NoValue
ZoneAwarenessConfig:
AvailabilityZoneCount: 2
ZoneAwarenessEnabled: "true"
EBSOptions:
EBSEnabled: true
Iops: 0
VolumeSize: 10
VolumeType: gp2
AccessPolicies:
Version: "2012-10-17"
Statement:
Principal:
AWS: !GetAtt CRMTrailElasticRole.Arn
Action: "es:*"
Resource: "*"
AdvancedOptions:
indices.fielddata.cache.size: ""
rest.action.multi.allow_explicit_index: "true"
ElasticsearchVersion: "6.8"
VPCOptions:
SecurityGroupIds:
SubnetIds:
Type: AWS::EC2::VPCEndpoint
Properties:
ServiceName: !GetAtt CRMTrailElasticSearchDomainCRMSource.Arn
VpcId: !Ref CRMTrailVPC
VpcEndpointType: Interface
PrivateDnsEnabled: true
SecurityGroupIds:
SubnetIds:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: 2012-10-17
Statement:
Principal:
Service: lambda.amazonaws.com
Action: sts:AssumeRole
Path: /
Policies:
PolicyDocument:
Version: 2012-10-17
Statement:
Action:
Resource: "arn:aws:lambda:*:*:function:*"
CRMTrailTestLambda:
Type: AWS::Serverless::Function
Properties:
Handler: crmtrail.main
CodeUri: crmtrail
MemorySize: 512
Timeout: 900
FunctionName: crm-trail-es-test
Environment:
Variables:
ELASTIC_LAMBDA_ARN: !GetAtt CRMTrailProxyLambda.Arn
Role: !GetAtt CRMTrailTestLambdaRole.Arn
Events:
ApiTrigger:
Type: Api
Properties:
Method: GET
RestApiId: !Ref CRMTrailApiGateway
Path: /test
Outputs:
ApiGatewayOutPut:
Description: "Api gateway URL"
Value: !Sub "https://${CRMTrailApiGateway}.execute-api.${AWS::Region}.amazonaws.com/${StageName}"
ElasticSearchDomain:
Description: "Domain name"
Value: !Ref CRMTrailElasticSearchDomainCRMSource
ElasticsearchDomainARN:
Description: "ElasticsearchDomainARN"
Value: !GetAtt CRMTrailElasticSearchDomainCRMSource.Arn
ElasticDomainARN:
Description: "ElasticDomainARN"
Value: !GetAtt CRMTrailElasticSearchDomainCRMSource.DomainArn
ElasticsearchDomainEndpoint:
Description: "ElasticsearchDomainEndpoint"
Value: !GetAtt CRMTrailElasticSearchDomainCRMSource.DomainEndpoint
And the Lambda for accessing the OpenSearch domain is:
import AWS from 'aws-sdk'
import path from "path"

const { ES_ENDPOINT } = process.env;
const endpoint = new AWS.Endpoint(ES_ENDPOINT);
const httpsClient = new AWS.NodeHttpClient();
const credentials = new AWS.EnvironmentCredentials("AWS");
const esDomain = {
  index: "sampleindex",
  doctype: "_doc"
};

export function main(event) {
  console.log(`event: ${JSON.stringify(event)}`);
  const {
    httpMethod,
    requestPath,
    payload,
    isGlobal
  } = JSON.parse(event.body);

  const request = new AWS.HttpRequest(endpoint);
  request.method = httpMethod;
  request.path = !isGlobal ? path.join("/", esDomain.index, esDomain.doctype) : "";
  console.log(`Path: ${JSON.stringify(request.path)}, ${requestPath}`);
  request.path += requestPath;
  console.log(`Path: ${JSON.stringify(request.path)}`);
  request.region = "us-west-2";
  request.body = JSON.stringify(payload);
  request.headers["presigned-expires"] = false;
  request.headers["Content-Type"] = "application/json";
  request.headers.Host = endpoint.host;
  console.log(`request: ${JSON.stringify(request)}`);

  // Sign the request with SigV4 for the "es" service
  const signer = new AWS.Signers.V4(request, "es");
  signer.addAuthorization(credentials, new Date());
  console.log(`signer: ${JSON.stringify(signer)}`);

  return new Promise((resolve, reject) => {
    httpsClient.handleRequest(
      request,
      null,
      (response) => {
        const { statusCode, statusMessage, headers } = response;
        let body = "";
        response.on("data", (chunk) => {
          body += chunk;
        });
        response.on("end", () => {
          const data = {
            statusCode,
            statusMessage,
            headers
          };
          /** debug */
          console.log('body===>', body);
          /** debug */
          if (body)
            data.body = httpMethod !== "GET" ? JSON.parse(body) : body;
          resolve(data);
        });
      },
      (err) => reject({ err })
    );
  });
}
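For reference, the handler above expects an API Gateway event whose body is a JSON string carrying the fields destructured in main(). A hedged sketch of such an event follows; the _search path is only an illustration, not taken from the original post.

```javascript
// Event shape consumed by main(): body is a JSON string with the
// httpMethod/requestPath/payload/isGlobal fields the handler destructures.
const sampleEvent = {
  body: JSON.stringify({
    httpMethod: "GET",
    requestPath: "/_search?q=name:test", // illustrative search path
    payload: null,
    isGlobal: false
  })
};
const parsed = JSON.parse(sampleEvent.body);
console.log(parsed.httpMethod, parsed.isGlobal); // GET false
```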
submitted by NevinEdwin to aws [link] [comments]


2024.05.06 17:04 wristwearing What is Proxy-Seller? How Proxy-Seller can benefit you and how to make passive income with its affiliate program.

In the vast digital landscape, businesses and individuals often require reliable proxy services to ensure secure and anonymous browsing. Proxy-Seller emerges as a leading player in the industry, offering a comprehensive range of residential proxy services. With an extensive network spanning over 400 networks, 800 subnets, and 220 countries, Proxy-Seller is committed to providing top-notch proxy solutions to meet diverse needs.

Introducing Proxy-Seller: Unleashing the Power of Residential Proxies
Proxy-Seller stands out as a trusted and reputable company specializing in residential proxy services. Their extensive network infrastructure ensures high-quality residential proxies that are sourced from a wide range of networks worldwide. With both Datacenter IPv4 and IPv6 options, Proxy-Seller offers flexibility and reliability to cater to various requirements.

Notably, Proxy-Seller's city-level targeting feature allows businesses and individuals to precisely focus their proxy connections on specific geographic areas. This level of precision proves invaluable for market research, ad verification, and localized testing. Whether you're an individual seeking anonymous browsing or a business in need of secure data scraping, Proxy-Seller's vast proxy network is equipped to handle it all.

Proxy-Seller Affiliate Program: A Lucrative Opportunity
Proxy-Seller offers an enticing affiliate program that empowers individuals to earn passive income by promoting their high-quality proxy services. By partnering with Proxy-Seller, you can monetize your website or social media platforms by recommending their products to your audience.

Through the Proxy-Seller affiliate program, you can earn a commission for each referral that results in a successful purchase. This presents an excellent opportunity for website owners, bloggers, influencers, and social media enthusiasts to generate income by leveraging their online presence.

Leveraging the Power of Affiliate Marketing with a Website
To maximize your potential earnings through the Proxy-Seller affiliate program, having a website is advantageous. Owning a website allows you to tap into premium affiliate programs offered by renowned brands like Walmart, AliExpress, and popular affiliate networks such as Admitad and Awin.

Building a website dedicated to affiliate marketing is now easier than ever. Services like Vasioncart provide a comprehensive one-stop solution for creating and designing affiliate websites. Even if you're not proficient in web development or design, Vasioncart offers user-friendly tools and templates to help you establish a professional-looking website to promote Proxy-Seller and other affiliate products.

Promoting Proxy-Seller Products through SEO-Optimized Content
Creating compelling and SEO-optimized content is key to driving traffic and attracting potential customers to your website. Writing informative blog articles about proxy products and their applications can be an effective strategy to generate interest and increase conversions.

If you're not confident in your writing skills, tools like seowriting.ai can be immensely beneficial. These AI-powered writing tools assist you in generating high-quality, SEO-optimized articles with ease. By leveraging such tools, you can create engaging content that effectively promotes Proxy-Seller's products and encourages visitors to make a purchase.

Conclusion

Proxy-Seller offers a comprehensive suite of residential proxy services, making it a prominent player in the industry. By joining their affiliate program, you can tap into a lucrative passive income stream by promoting their high-quality proxy solutions. With a website and the aid of tools like seowriting.ai, you can create compelling content that drives traffic and generates conversions. Embrace the power of Proxy-Seller and affiliate marketing to unlock a world of opportunities for financial success.
submitted by wristwearing to u/wristwearing [link] [comments]

