
2024.05.14 18:32 zmoss1 Unsure how to proceed on a simple code

I recently started a college course that teaches beginner Java. Wow, I am hooked now! I have finished all of the assignments due and decided I wanted to experiment and write my own code outside of the class, just for fun and growth. So far I have been able to apply what I have learned to make the first aspect of my code successful, but I'm getting pretty hung up on the second part.
I'm writing a program that will ideally calculate the number of fish needed for someone to reach level 99 fishing in RuneScape. So far, within the console, I've been able to prompt the user to enter their current xp as an integer, and the program will then output the xp remaining to reach level 99. After that, the console prompts the user to enter the type of fish.
This is where I'm stuck. Ideally I want the user to be able to simply type the name of the fish and have the program calculate the number needed (each type of fish has a specific xp value) to reach level 99. My question is: How do I accept user input in the form of a word that will have a numerical value assigned to it?
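One way to do this in Java is a Map that links each fish name to its xp value, so the word the user types becomes a lookup key. A minimal sketch of the idea (the xp-per-catch values below are illustrative, check the wiki for the real ones; 13,034,431 is the total xp for level 99):

    import java.util.Map;
    import java.util.Scanner;

    public class RunescapeFishing {
        public static void main(String[] args) {
            // Illustrative xp-per-catch values; verify against the wiki.
            Map<String, Double> fishXp = Map.of(
                "shrimp", 10.0,
                "trout", 50.0,
                "lobster", 90.0,
                "shark", 110.0
            );
            Scanner in = new Scanner(System.in);
            System.out.print("Enter your current xp: ");
            double xpRemaining = 13_034_431 - in.nextDouble();
            in.nextLine(); // consume the rest of the input line
            System.out.print("Enter the type of fish: ");
            String fish = in.nextLine().trim().toLowerCase();
            Double xpEach = fishXp.get(fish); // null when the word isn't in the map
            if (xpEach == null) {
                System.out.println("Unknown fish: " + fish);
            } else {
                System.out.println("Fish needed: " + (long) Math.ceil(xpRemaining / xpEach));
            }
        }
    }

A HashMap works the same way if the values should be loaded from a file or grow at runtime.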
runescapeFishing
submitted by zmoss1 to javahelp [link] [comments]


2024.05.14 18:26 archiegillis Seeking Feedback on Real-Time Multilingual Communication System

Hey Reddit community,
I'm currently doing some customer discovery for a new communication system that I’ve been working on, and I’d love to get your feedback!
The Innovation: My invention utilizes lip-reading technology to convert silent lip movements and facial expressions into expressive audio using the user's vocal profile. The core feature of this system is that it enables real-time, in-person multilingual communication with no delay in conversations. The silent input does not interfere with the audio output, ensuring seamless and immediate translation.
Patent Pending: I have patent pending status for this technology and am in the process of determining if it truly addresses a significant pain point for potential users.
The Question: I’m particularly interested in understanding whether or not delays in current translation and communication systems are a major issue for you. Do you find it bothersome when there's a lag between spoken input and translated output? How much of a difference would a system with no delay make in your daily communications, whether it be in customer service, education, healthcare, or business meetings?
Use Cases: Here are a few scenarios where this technology could be particularly beneficial:
  1. Multilingual Customer Service: Seamless communication with international clients in real time.
  2. Educational Settings: Real-time translation for multilingual classrooms.
  3. Healthcare: Bridging language gaps between healthcare providers and patients.
  4. International Business Meetings: Facilitating real-time, multilingual discussions.
Your Feedback: I would greatly appreciate your insights on the following:
Your feedback will be invaluable in helping me refine this technology to better meet the needs of potential users.
Thanks in advance for your help!
Abstract: The invention focuses on a novel communication system that utilizes lip-reading technology to convert silent lip movements and facial expressions into expressive audio that uses the user's vocal profile for real-time communications in different languages. While it leverages Large Language Models (LLMs) and text conversion, the core feature is enabling real-time, in-person multilingual communication. (Think Google Glass.)
Key Application: Real-Time Multilingual Communication: Converts visual data into expressive audio in real-time using the user's vocal profile, facilitating in-person conversations in different languages.
Novel Elements
  1. Visual to Expressive Audio Conversion: Utilizes lip-reading algorithms to transform silent lip movements into expressive audio.
  2. Real-Time Multilingual Translation: Converts visual data into different languages in real time, using the user's vocal profile.
  3. Personalized Voice Synthesis: Synthesizes the translated audio with personalized voice profiles, retaining emotional nuances from visual input.
  4. Input Does Not Interfere with Output: A listener can hear the communication as it is being said, with much less delay, making the conversation flow more seamlessly.
Use Cases
  1. Multilingual Customer Service:
    • Scenario: A customer service representative communicates with international clients in person.
    • Process: The representative’s silent lip movements are captured and converted into expressive audio in the client's language using the representative's vocal profile.
    • Benefit: Facilitates seamless in-person communication across different languages without the need for audio input from the speaker.
  2. Educational Settings:
    • Scenario: A multilingual classroom where the teacher instructs students from various linguistic backgrounds.
    • Process: The teacher’s lip movements are captured and converted into expressive audio in the students’ preferred languages, using the teacher's vocal profile.
    • Benefit: Enhances understanding and participation in a diverse classroom, enabling real-time multilingual communication.
  3. Healthcare Communication:
    • Scenario: A doctor communicates with a patient who speaks a different language.
    • Process: The doctor’s silent lip movements are captured and converted into expressive audio in the patient's language using the doctor’s vocal profile.
    • Benefit: Ensures clear and real-time communication without relying on audio input, bridging language gaps between healthcare providers and patients.
  4. International Business Meetings:
    • Scenario: Executives from different countries participate in an in-person meeting.
    • Process: Each participant’s silent lip movements are captured and converted into expressive audio in the other participants' languages using their vocal profiles.
    • Benefit: Facilitates real-time, multilingual discussions, improving collaboration and decision-making across different languages.
Problem Worth Solving?

submitted by archiegillis to startups [link] [comments]


2024.05.14 18:21 archiegillis Seeking Feedback on Real-Time Multilingual Communication System

Hey Reddit community,
I'm currently doing some customer discovery for a new communication system that I’ve been working on, and I’d love to get your feedback!
The Innovation: My invention utilizes lip-reading technology to convert silent lip movements and facial expressions into expressive audio using the user's vocal profile. The core feature of this system is that it enables real-time, in-person multilingual communication with no delay in conversations. The silent input does not interfere with the audio output, ensuring seamless and immediate translation.
Patent Pending: I have patent pending status for this technology and am in the process of determining if it truly addresses a significant pain point for potential users.
The Question: I’m particularly interested in understanding whether or not delays in current translation and communication systems are a major issue for you. Do you find it bothersome when there's a lag between spoken input and translated output? How much of a difference would a system with no delay make in your daily communications, whether it be in customer service, education, healthcare, or business meetings?
Use Cases: Here are a few scenarios where this technology could be particularly beneficial:
  1. Multilingual Customer Service: Seamless communication with international clients in real time.
  2. Educational Settings: Real-time translation for multilingual classrooms.
  3. Healthcare: Bridging language gaps between healthcare providers and patients.
  4. International Business Meetings: Facilitating real-time, multilingual discussions.
Your Feedback: I would greatly appreciate your insights on the following:
Your feedback will be invaluable in helping me refine this technology to better meet the needs of potential users.
Thanks in advance for your help!
Abstract: The invention focuses on a novel communication system that utilizes lip-reading technology to convert silent lip movements and facial expressions into expressive audio that uses the user's vocal profile for real-time communications in different languages. While it leverages Large Language Models (LLMs) and text conversion, the core feature is enabling real-time, in-person multilingual communication. (Think Google Glass.)

Key Application: Real-Time Multilingual Communication: Converts visual data into expressive audio in real-time using the user's vocal profile, facilitating in-person conversations in different languages.
Novel Elements
  1. Visual to Expressive Audio Conversion: Utilizes lip-reading algorithms to transform silent lip movements into expressive audio.
  2. Real-Time Multilingual Translation: Converts visual data into different languages in real time, using the user's vocal profile.
  3. Personalized Voice Synthesis: Synthesizes the translated audio with personalized voice profiles, retaining emotional nuances from visual input.
  4. Input Does Not Interfere with Output: A listener can hear the communication as it is being said, with much less delay, making the conversation flow more seamlessly.
Use Cases
  1. Multilingual Customer Service:
    • Scenario: A customer service representative communicates with international clients in person.
    • Process: The representative’s silent lip movements are captured and converted into expressive audio in the client's language using the representative's vocal profile.
    • Benefit: Facilitates seamless in-person communication across different languages without the need for audio input from the speaker.
  2. Educational Settings:
    • Scenario: A multilingual classroom where the teacher instructs students from various linguistic backgrounds.
    • Process: The teacher’s lip movements are captured and converted into expressive audio in the students’ preferred languages, using the teacher's vocal profile.
    • Benefit: Enhances understanding and participation in a diverse classroom, enabling real-time multilingual communication.
  3. Healthcare Communication:
    • Scenario: A doctor communicates with a patient who speaks a different language.
    • Process: The doctor’s silent lip movements are captured and converted into expressive audio in the patient's language using the doctor’s vocal profile.
    • Benefit: Ensures clear and real-time communication without relying on audio input, bridging language gaps between healthcare providers and patients.
  4. International Business Meetings:
    • Scenario: Executives from different countries participate in an in-person meeting.
    • Process: Each participant’s silent lip movements are captured and converted into expressive audio in the other participants' languages using their vocal profiles.
    • Benefit: Facilitates real-time, multilingual discussions, improving collaboration and decision-making across different languages.
Problem Worth Solving?

submitted by archiegillis to Entrepreneur [link] [comments]


2024.05.14 18:14 archiegillis Seeking Feedback on Real-Time Multilingual Communication System

Hey Reddit community,
I'm currently doing some customer discovery for a new communication system that I’ve been working on, and I’d love to get your feedback!
The Innovation: My invention utilizes lip-reading technology to convert silent lip movements and facial expressions into expressive audio using the user's vocal profile. The core feature of this system is that it enables real-time, in-person multilingual communication with no delay in conversations. The silent input does not interfere with the audio output, ensuring seamless and immediate translation.
Patent Pending: I have patent pending status for this technology and am in the process of determining if it truly addresses a significant pain point for potential users.
The Question: I’m particularly interested in understanding whether or not delays in current translation and communication systems are a major issue for you. Do you find it bothersome when there's a lag between spoken input and translated output? How much of a difference would a system with no delay make in your daily communications, whether it be in customer service, education, healthcare, or business meetings?
Use Cases: Here are a few scenarios where this technology could be particularly beneficial:
  1. Multilingual Customer Service: Seamless communication with international clients in real time.
  2. Educational Settings: Real-time translation for multilingual classrooms.
  3. Healthcare: Bridging language gaps between healthcare providers and patients.
  4. International Business Meetings: Facilitating real-time, multilingual discussions.
Your Feedback: I would greatly appreciate your insights on the following:
Your feedback will be invaluable in helping me refine this technology to better meet the needs of potential users.
Thanks in advance for your help!
....a bit more info...
Abstract: The invention focuses on a novel communication system that utilizes lip-reading technology to convert silent lip movements and facial expressions into expressive audio that uses the user's vocal profile for real-time communications in different languages. While it leverages Large Language Models (LLMs) and text conversion, the core feature is enabling real-time, in-person multilingual communication. (Think Google Glass.)
Key Application: Real-Time Multilingual Communication: Converts visual data into expressive audio in real-time using the user's vocal profile, facilitating in-person conversations in different languages.
Novel Elements
  1. Visual to Expressive Audio Conversion: Utilizes lip-reading algorithms to transform silent lip movements into expressive audio.
  2. Real-Time Multilingual Translation: Converts visual data into different languages in real time, using the user's vocal profile.
  3. Personalized Voice Synthesis: Synthesizes the translated audio with personalized voice profiles, retaining emotional nuances from visual input.
  4. Input Does Not Interfere with Output: A listener can hear the communication as it is being said, with much less delay, making the conversation flow more seamlessly.
Use Cases
  1. Multilingual Customer Service:
    • Scenario: A customer service representative communicates with international clients in person.
    • Process: The representative’s silent lip movements are captured and converted into expressive audio in the client's language using the representative's vocal profile.
    • Benefit: Facilitates seamless in-person communication across different languages without the need for audio input from the speaker.
  2. Educational Settings:
    • Scenario: A multilingual classroom where the teacher instructs students from various linguistic backgrounds.
    • Process: The teacher’s lip movements are captured and converted into expressive audio in the students’ preferred languages, using the teacher's vocal profile.
    • Benefit: Enhances understanding and participation in a diverse classroom, enabling real-time multilingual communication.
  3. Healthcare Communication:
    • Scenario: A doctor communicates with a patient who speaks a different language.
    • Process: The doctor’s silent lip movements are captured and converted into expressive audio in the patient's language using the doctor’s vocal profile.
    • Benefit: Ensures clear and real-time communication without relying on audio input, bridging language gaps between healthcare providers and patients.
  4. International Business Meetings:
    • Scenario: Executives from different countries participate in an in-person meeting.
    • Process: Each participant’s silent lip movements are captured and converted into expressive audio in the other participants' languages using their vocal profiles.
    • Benefit: Facilitates real-time, multilingual discussions, improving collaboration and decision-making across different languages.
Problem Worth Solving?

submitted by archiegillis to TranslationStudies [link] [comments]


2024.05.14 18:13 cryptokaykay What are your current challenges with evaluations?

What are your current challenges with evaluations?
What challenges are you facing and what tools are you using? I am thinking about building out a developer-friendly, open-source evaluations toolkit. I'm thinking of starting with a simple interface where you pass the context, input, output, and expected output and run it through some basic tests (both LLM-based and non-LLM-based), and also allow the ability to write custom assertions.
But I am wondering if you all have any insights into what other capabilities might be useful.
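To make that interface concrete, it could be as small as this (a rough Python sketch; every name here is made up rather than an existing library):

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class EvalCase:
        context: str
        input: str
        output: str
        expected: str

    def exact_match(case: EvalCase) -> bool:
        # A non-LLM-based check; an LLM-based grader would just be another Callable.
        return case.output.strip() == case.expected.strip()

    def run_evals(cases: list[EvalCase], checks: list[Callable[[EvalCase], bool]]) -> list[bool]:
        # One pass/fail per case; custom assertions slot in as extra checks.
        return [all(check(case) for check in checks) for case in cases]

    cases = [EvalCase(context="", input="2+2?", output="4", expected="4")]
    print(run_evals(cases, [exact_match]))  # [True]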
submitted by cryptokaykay to PromptEngineering [link] [comments]


2024.05.14 18:13 cryptokaykay What are your current challenges with evaluations?

What are your current challenges with evaluations?
What challenges are you facing and what tools are you using? I am thinking about building out a developer-friendly, open-source evaluations toolkit. I'm thinking of starting with a simple interface where you pass the context, input, output, and expected output and run it through some basic tests (both LLM-based and non-LLM-based), and also allow the ability to write custom assertions.
But I am wondering if you all have any insights into what other capabilities might be useful.
submitted by cryptokaykay to LocalLLaMA [link] [comments]


2024.05.14 18:12 AliveNet5570 Offering: English Seeking: French

Hello Language Exchange,
I'm an 18-year-old English speaker currently learning French (at B1/B2, I think?), and I want to start doing output. I'm decently good at input so far, to the point where I've read books in French, but I still have no confidence in my output, so I want to help build that up via text.
And of course, I'm similarly offering to help any English learners with their English in the same way. I've got a (fairly rare?) dialect of British English, so it may be useful for learning some of the more non-standard parts of English common to native speakers, if you're interested in that as well 👀
submitted by AliveNet5570 to language_exchange [link] [comments]


2024.05.14 18:12 Jpwolfe99 PyMuPdf doesn't recognize every fillable element in a PDF form

I am trying to use Python to read in a PDF form so that I can fill all the elements and then create a new filled in PDF. I found code from this repo and everything works correctly for the most part, but some elements aren't being recognized. Below is what the form looks like when I am editing the elements: https://i.sstatic.net/oTn5wiGA.png
However, when I run my code, most but not all of the elements get filled in. In this example I am filling each box with "STRING". https://i.sstatic.net/AQn06u8J.png
In my code, when I list all of the element names ("other", "route_to_1", "route_to_2", etc.), all the names are correct and have been checked over and over. When I debug my code and look at the variable that stores all the form elements, it's simply misreading some of the elements. I am not sure what is causing this, whether Acrobat made the form incorrectly or there's a problem with the code. Any help is appreciated. Here's the code I have:
create_pdf.py
from pdf_processing import ProcessPdf

DATA_OBJECT = {
    "other": "string", "route_to_1": "string", "route_to_2": "string", "route_to_3": "string", "route_to_4": "string", "route_to_5": "string", "route_to_6": "string", "route_to_7": "string", "route_to_8": "string", "route_to_9": "string", "route_to_10": "string", "route_to_11": "string",
    "route_to_alt_1": "string", "route_to_alt_2": "string", "route_to_alt_3": "string", "route_to_alt_4": "string", "route_to_alt_5": "string",
    "dep_aerodrome": "string", "dep_elev": "string", "dep_atis_id": "string", "dep_atis_freq": "string", "dest_aerodrome": "string", "dest_elev": "string", "alt_dest": "string", "alt_elev": "string",
    "chan_id_1": "string", "chan_freq_1": "string", "chan_id_2": "string", "chan_freq_2": "string", "chan_id_3": "string", "chan_freq_3": "string", "chan_id_4": "string", "chan_freq_4": "string", "chan_id_5": "string", "chan_freq_5": "string", "chan_id_6": "string", "chan_freq_6": "string",
    "chan_id_7": "string", "chan_freq_7": "string", "chan_id_8": "string", "chan_freq_8": "string", "chan_id_9": "string", "chan_freq_9": "string", "chan_id_10": "string", "chan_freq_10": "string", "chan_id_11": "string", "chan_freq_11": "string",
    "chan_id_alt_1": "string", "chan_freq_alt_1": "string", "chan_id_alt_2": "string", "chan_freq_alt_2": "string", "chan_id_alt_3": "string", "chan_freq_alt_3": "string", "chan_id_alt_4": "string", "chan_freq_alt_4": "string", "chan_id_alt_5": "string", "chan_freq_alt_5": "string",
    "course_1": "string", "course_2": "string", "course_3": "string", "course_4": "string", "course_5": "string", "course_6": "string", "course_7": "string", "course_8": "string", "course_9": "string", "course_10": "string", "course_11": "string",
    "course_alt_1": "string", "course_alt_2": "string", "course_alt_3": "string", "course_alt_4": "string", "course_alt_5": "string",
    "dep_clearance_id": "string", "dep_clearance_freq": "string", "time_off": "string", "dep_app_cont_id": "string", "dep_app_cont_freq": "string",
    "dist_1": "string", "dist_2": "string", "dist_3": "string", "dist_4": "string", "dist_5": "string", "dist_6": "string", "dist_7": "string", "dist_8": "string", "dist_9": "string", "dist_10": "string", "dist_11": "string", "dist_total": "string",
    "alt_route": "string", "alt_app_cont_id": "string", "alt_app_cont_freq": "string", "dist_alt_1": "string", "dist_alt_2": "string", "dist_alt_3": "string", "dist_alt_4": "string", "dist_alt_5": "string",
    "ete_1": "string", "ete_2": "string", "ete_3": "string", "ete_4": "string", "ete_5": "string", "ete_6": "string", "ete_7": "string", "ete_8": "string", "ete_9": "string", "ete_10": "string", "ete_11": "string", "ete_total": "string",
    "ete_alt_1": "string", "ete_alt_2": "string", "ete_alt_3": "string", "ete_alt_4": "string", "ete_alt_5": "string",
    "eta_1": "string", "ata_1": "string", "eta_2": "string", "ata_2": "string", "eta_3": "string", "ata_3": "string", "eta_4": "string", "ata_4": "string", "eta_5": "string", "ata_5": "string", "eta_6": "string", "ata_6": "string",
    "eta_7": "string", "ata_7": "string", "eta_8": "string", "ata_8": "string", "eta_9": "string", "ata_9": "string", "eta_10": "string", "ata_10": "string", "eta_11": "string", "ata_11": "string", "eta_total": "string", "ata_total": "string",
    "eta_alt_1": "string", "ata_alt_1": "string", "eta_alt_2": "string", "ata_alt_2": "string", "eta_alt_3": "string", "ata_alt_3": "string", "eta_alt_4": "string", "ata_alt_4": "string", "eta_alt_5": "string", "ata_alt_5": "string",
    "dep_gnd_cont_id": "string", "dep_gnd_cont_freq":
"string", "tas": "string", "mach": "string", "dest_tower_id": "string", "dest_tower_freq": "string", "leg_fuel_1": "string", "leg_fuel_2": "string", "leg_fuel_3": "string", "leg_fuel_4": "string", "leg_fuel_5": "string", "leg_fuel_6": "string", "leg_fuel_7": "string", "leg_fuel_8": "string", "leg_fuel_9": "string", "leg_fuel_10": "string", "leg_fuel_11": "string", "leg_fuel_total": "string", "alt_altitude": "string", "alt_tower_id": "string", "alt_tower_freq": "string", "leg_fuel_alt_1": "string", "leg_fuel_alt_2": "string", "leg_fuel_alt_3": "string", "leg_fuel_alt_4": "string", "leg_fuel_alt_5": "string", "efr_1": "string", "afr_1": "string", "efr_2": "string", "afr_2": "string", "efr_3": "string", "afr_3": "string", "efr_4": "string", "afr_4": "string", "efr_5": "string", "afr_5": "string", "efr_6": "string", "afr_6": "string", "efr_7": "string", "afr_7": "string", "efr_8": "string", "afr_8": "string", "efr_9": "string", "afr_9": "string", "efr_10": "string", "afr_10": "string", "efr_11": "string", "afr_11": "string", "efr_total": "string", "afr_total": "string", "efr_alt_1": "string", "afr_alt_1": "string", "efr_alt_2": "string", "afr_alt_2": "string", "efr_alt_3": "string", "afr_alt_3": "string", "efr_alt_4": "string", "afr_alt_4": "string", "efr_alt_5": "string", "afr_alt_5": "string", "cont_fuel": "string", "cont_fuel_1": "string", "cont_fuel_2": "string", "cont_fuel_3": "string", "cont_fuel_4": "string", "cont_fuel_5": "string", "cont_fuel_6": "string", "cont_fuel_7": "string", "cont_fuel_8": "string", "cont_fuel_9": "string", "cont_fuel_10": "string", "cont_fuel_11": "string", "alt_fuel": "string", "cont_fuel_alt_1": "string", "cont_fuel_alt_2": "string", "cont_fuel_alt_3": "string", "cont_fuel_alt_4": "string", "cont_fuel_alt_5": "string", "dep_tower_id": "string", "dep_tower_freq": "string", "lbs_ph": "string", "lbs_pm": "string", "dest_gnd_cont_id": "string", "dest_gnd_cont_freq": "string", "notes_1": "string", "notes_2": "string", "notes_3": "string", "notes_4": "string", "notes_5": "string", "notes_6": "string", "notes_7": "string", "notes_8": "string", "notes_9": "string", "notes_10": "string", "notes_11": "string", "notes_12": "string", "alt_gnd_cont_id": "string", "alt_gnd_cont_freq": "string", "notes_alt_1": "string", "notes_alt_2": "string", "notes_alt_3": "string", "notes_alt_4": "string", "notes_alt_5": "string", "alt_time": "string", "route_dest_iaf_fuel": "string", "route_alt_iaf_fuel": "string", "approaches_fuel": "string", "in_air_used_fuel": "string", "reserve_fuel": "string", "rwy_length_dest": "string", "lighting_dest": "string", "fuel_dest": "string", "ils_dest": "string", "loc_dest": "string", "asr_dest": "string", "par_mins_dest": "string", "tac_mins_dest": "string", "arr_gear_dest": "string", "pubs_dest": "string", "notams_dest": "string", "fuel_packet_dest_1": "string", "fuel_packet_dest_2": "string", "fuel_packet_dest_3": "string", "fuel_packet_dest_4": "string", "etc_dest": "string", "last_cruise_req_fuel": "string", "map_to_iaf_req_fuel": "string", "bingo_req_fuel": "string", "last_cruise_appr_fuel": "string", "map_to_iaf_appr_fuel": "string", "rwy_length_alt": "string", "lighting_alt": "string", "fuel_alt": "string", "ils_alt": "string", "loc_alt": "string", "asr_alt": "string", "par_mins_alt": "string", "tac_mins_alt": "string", "arr_gear_alt": "string", "pubs_alt": "string", "notams_alt": "string", "fuel_packet_alt_1": "string", "fuel_packet_alt_2": "string", "fuel_packet_alt_3": "string", "fuel_packet_alt_4": "string", "etc_alt": "string", 
"last_cruise_res_fuel": "string", "map_to_iaf_fuel": "string", "add_res_fuel": "string", "stto_fuel": "string", "total_req_fuel": "string", "total_aboard_fuel": "string", "spare_fuel": "string", "last_cruise_total_fuel": "string", "map_to_iaf_total_fuel": "string", "bingo_total": "string", "waypoint_1": "string", "waypoint_2": "string", "waypoint_3": "string", "waypoint_4": "string", "waypoint_5": "string", "waypoint_6": "string", "waypoint_7": "string", "waypoint_8": "string", "waypoint_9": "string", "waypoint_10": "string", "waypoint_11": "string", "waypoint_12": "string", "waypoint_13": "string", "waypoint_14": "string", "waypoint_15": "string", "waypoint_16": "string", "clearance_cleared_to": "string", "clearance_altitude": "string", "clearance_freq": "string", "clearance_transp": "string", "clearance_route": "string" } data = DATA_OBJECT output_file = 'final_pdf.pdf' temp_files = [] pdf = ProcessPdf('pdf_temp/', output_file) ''' PDF_TEMPLATE_PATH = path/to/your.pdf ''' data_pdf = pdf.add_data_to_pdf("Blank Jet Log Fillable.pdf", data) temp_files.append(data_pdf) 
pdf_processing.py
import os
import re
import fitz  # requires fitz, PyMuPDF
import pdfrw
import subprocess
import os.path
import sys
from PIL import Image

'''
replace all the constants (the one in caps) with your own lists
'''

'''
FORM_KEYS is a dictionary (key-value pair) that contains
1. keys - which are all the key names in the PDF form
2. values - which are the type for all the keys in the PDF form. (string, checkbox, etc.)

Eg. PDF form contains
1. First Name
2. Last Name
3. Sex (Male or Female)
4. Mobile Number

FORM_KEYS = {
    "fname": "string",
    "lname": "string",
    "sex": "checkbox",
    "mobile": "number"
}

This FORM_KEYS(key) returns the type of value for that key.
I'm passing this as 2nd argument to encode_pdf_string() function.
'''

FORM_KEYS = {
    "other": "string", "route_to_1": "string", "route_to_2": "string", "route_to_3": "string", "route_to_4": "string", "route_to_5": "string", "route_to_6": "string", "route_to_7": "string", "route_to_8": "string", "route_to_9": "string", "route_to_10": "string", "route_to_11": "string",
    "route_to_alt_1": "string", "route_to_alt_2": "string", "route_to_alt_3": "string", "route_to_alt_4": "string", "route_to_alt_5": "string",
    "dep_aerodrome": "string", "dep_elev": "string", "dep_atis_id": "string", "dep_atis_freq": "string", "dest_aerodrome": "string", "dest_elev": "string", "alt_dest": "string", "alt_elev": "string",
    "chan_id_1": "string", "chan_freq_1": "string", "chan_id_2": "string", "chan_freq_2": "string", "chan_id_3": "string", "chan_freq_3": "string", "chan_id_4": "string", "chan_freq_4": "string", "chan_id_5": "string", "chan_freq_5": "string", "chan_id_6": "string", "chan_freq_6": "string",
    "chan_id_7": "string", "chan_freq_7": "string", "chan_id_8": "string", "chan_freq_8": "string", "chan_id_9": "string", "chan_freq_9": "string", "chan_id_10": "string", "chan_freq_10": "string", "chan_id_11": "string", "chan_freq_11": "string",
    "chan_id_alt_1": "string", "chan_freq_alt_1": "string", "chan_id_alt_2": "string", "chan_freq_alt_2": "string", "chan_id_alt_3": "string", "chan_freq_alt_3": "string", "chan_id_alt_4": "string", "chan_freq_alt_4": "string", "chan_id_alt_5": "string", "chan_freq_alt_5": "string",
    "course_1": "string", "course_2": "string", "course_3": "string", "course_4": "string", "course_5": "string", "course_6": "string", "course_7": "string", "course_8": "string", "course_9": "string", "course_10": "string", "course_11": "string",
    "course_alt_1": "string", "course_alt_2": "string", "course_alt_3": "string", "course_alt_4": "string", "course_alt_5": "string",
    "dep_clearance_id": "string", "dep_clearance_freq": "string", "time_off": "string", "dep_app_cont_id": "string", "dep_app_cont_freq": "string",
    "dist_1": "string", "dist_2": "string", "dist_3": "string", "dist_4": "string", "dist_5": "string", "dist_6": "string", "dist_7": "string", "dist_8": "string", "dist_9": "string", "dist_10": "string", "dist_11": "string", "dist_total": "string",
    "alt_route": "string", "alt_app_cont_id": "string", "alt_app_cont_freq": "string", "dist_alt_1": "string", "dist_alt_2": "string", "dist_alt_3": "string", "dist_alt_4": "string", "dist_alt_5": "string",
    "ete_1": "string", "ete_2": "string", "ete_3": "string", "ete_4": "string", "ete_5": "string", "ete_6": "string", "ete_7": "string", "ete_8": "string", "ete_9": "string", "ete_10": "string", "ete_11": "string", "ete_total": "string",
    "ete_alt_1": "string", "ete_alt_2": "string", "ete_alt_3": "string", "ete_alt_4": "string", "ete_alt_5": "string",
    "eta_1": "string", "ata_1": "string", "eta_2": "string", "ata_2":
"string", "eta_3": "string", "ata_3": "string", "eta_4": "string", "ata_4": "string", "eta_5": "string", "ata_5": "string", "eta_6": "string", "ata_6": "string", "eta_7": "string", "ata_7": "string", "eta_8": "string", "ata_8": "string", "eta_9": "string", "ata_9": "string", "eta_10": "string", "ata_10": "string", "eta_11": "string", "ata_11": "string", "eta_total": "string", "ata_total": "string", "eta_alt_1": "string", "ata_alt_1": "string", "eta_alt_2": "string", "ata_alt_2": "string", "eta_alt_3": "string", "ata_alt_3": "string", "eta_alt_4": "string", "ata_alt_4": "string", "eta_alt_5": "string", "ata_alt_5": "string", "dep_gnd_cont_id": "string", "dep_gnd_cont_freq": "string", "tas": "string", "mach": "string", "dest_tower_id": "string", "dest_tower_freq": "string", "leg_fuel_1": "string", "leg_fuel_2": "string", "leg_fuel_3": "string", "leg_fuel_4": "string", "leg_fuel_5": "string", "leg_fuel_6": "string", "leg_fuel_7": "string", "leg_fuel_8": "string", "leg_fuel_9": "string", "leg_fuel_10": "string", "leg_fuel_11": "string", "leg_fuel_total": "string", "alt_altitude": "string", "alt_tower_id": "string", "alt_tower_freq": "string", "leg_fuel_alt_1": "string", "leg_fuel_alt_2": "string", "leg_fuel_alt_3": "string", "leg_fuel_alt_4": "string", "leg_fuel_alt_5": "string", "efr_1": "string", "afr_1": "string", "efr_2": "string", "afr_2": "string", "efr_3": "string", "afr_3": "string", "efr_4": "string", "afr_4": "string", "efr_5": "string", "afr_5": "string", "efr_6": "string", "afr_6": "string", "efr_7": "string", "afr_7": "string", "efr_8": "string", "afr_8": "string", "efr_9": "string", "afr_9": "string", "efr_10": "string", "afr_10": "string", "efr_11": "string", "afr_11": "string", "efr_total": "string", "afr_total": "string", "efr_alt_1": "string", "afr_alt_1": "string", "efr_alt_2": "string", "afr_alt_2": "string", "efr_alt_3": "string", "afr_alt_3": "string", "efr_alt_4": "string", "afr_alt_4": "string", "efr_alt_5": "string", "afr_alt_5": "string", "cont_fuel": "string", "cont_fuel_1": "string", "cont_fuel_2": "string", "cont_fuel_3": "string", "cont_fuel_4": "string", "cont_fuel_5": "string", "cont_fuel_6": "string", "cont_fuel_7": "string", "cont_fuel_8": "string", "cont_fuel_9": "string", "cont_fuel_10": "string", "cont_fuel_11": "string", "alt_fuel": "string", "cont_fuel_alt_1": "string", "cont_fuel_alt_2": "string", "cont_fuel_alt_3": "string", "cont_fuel_alt_4": "string", "cont_fuel_alt_5": "string", "dep_tower_id": "string", "dep_tower_freq": "string", "lbs_ph": "string", "lbs_pm": "string", "dest_gnd_cont_id": "string", "dest_gnd_cont_freq": "string", "notes_1": "string", "notes_2": "string", "notes_3": "string", "notes_4": "string", "notes_5": "string", "notes_6": "string", "notes_7": "string", "notes_8": "string", "notes_9": "string", "notes_10": "string", "notes_11": "string", "notes_12": "string", "alt_gnd_cont_id": "string", "alt_gnd_cont_freq": "string", "notes_alt_1": "string", "notes_alt_2": "string", "notes_alt_3": "string", "notes_alt_4": "string", "notes_alt_5": "string", "alt_time": "string", "route_dest_iaf_fuel": "string", "route_alt_iaf_fuel": "string", "approaches_fuel": "string", "in_air_used_fuel": "string", "reserve_fuel": "string", "rwy_length_dest": "string", "lighting_dest": "string", "fuel_dest": "string", "ils_dest": "string", "loc_dest": "string", "asr_dest": "string", "par_mins_dest": "string", "tac_mins_dest": "string", "arr_gear_dest": "string", "pubs_dest": "string", "notams_dest": "string", "fuel_packet_dest_1": "string", 
"fuel_packet_dest_2": "string", "fuel_packet_dest_3": "string", "fuel_packet_dest_4": "string", "etc_dest": "string", "last_cruise_req_fuel": "string", "map_to_iaf_req_fuel": "string", "bingo_req_fuel": "string", "last_cruise_appr_fuel": "string", "map_to_iaf_appr_fuel": "string", "rwy_length_alt": "string", "lighting_alt": "string", "fuel_alt": "string", "ils_alt": "string", "loc_alt": "string", "asr_alt": "string", "par_mins_alt": "string", "tac_mins_alt": "string", "arr_gear_alt": "string", "pubs_alt": "string", "notams_alt": "string", "fuel_packet_alt_1": "string", "fuel_packet_alt_2": "string", "fuel_packet_alt_3": "string", "fuel_packet_alt_4": "string", "etc_alt": "string", "last_cruise_res_fuel": "string", "map_to_iaf_fuel": "string", "add_res_fuel": "string", "stto_fuel": "string", "total_req_fuel": "string", "total_aboard_fuel": "string", "spare_fuel": "string", "last_cruise_total_fuel": "string", "map_to_iaf_total_fuel": "string", "bingo_total": "string", "waypoint_1": "string", "waypoint_2": "string", "waypoint_3": "string", "waypoint_4": "string", "waypoint_5": "string", "waypoint_6": "string", "waypoint_7": "string", "waypoint_8": "string", "waypoint_9": "string", "waypoint_10": "string", "waypoint_11": "string", "waypoint_12": "string", "waypoint_13": "string", "waypoint_14": "string", "waypoint_15": "string", "waypoint_16": "string", "clearance_cleared_to": "string", "clearance_altitude": "string", "clearance_freq": "string", "clearance_transp": "string", "clearance_route": "string" } def encode_pdf_string(value, type): if type == 'string': if value: return pdfrw.objects.pdfstring.PdfString.encode(value.upper()) else: return pdfrw.objects.pdfstring.PdfString.encode('') elif type == 'checkbox': if value == 'True' or value == True: return pdfrw.objects.pdfname.BasePdfName('/Yes') # return pdfrw.objects.pdfstring.PdfString.encode('Y') else: return pdfrw.objects.pdfname.BasePdfName('/No') # return pdfrw.objects.pdfstring.PdfString.encode('') return '' class ProcessPdf: def __init__(self, temp_directory, output_file): print('\n########## Initiating Pdf Creation Process #########\n') print('\nDirectory for storing all temporary files is: ', temp_directory) self.temp_directory = temp_directory print("Final Pdf name will be: ", output_file) self.output_file = output_file def add_data_to_pdf(self, template_path, data): print('\nAdding data to pdf...') template = pdfrw.PdfReader(template_path) for page in template.pages: annotations = page['/Annots'] if annotations is None: continue for annotation in annotations: if annotation['/Subtype'] == '/Widget': if annotation['/T']: key = annotation['/T'][1:-1] if re.search(r'.-[0-9]+', key): key = key[:-2] if key in data: annotation.update( pdfrw.PdfDict(V=encode_pdf_string(data[key], FORM_KEYS[key])) ) annotation.update(pdfrw.PdfDict(Ff=1)) template.Root.AcroForm.update(pdfrw.PdfDict(NeedAppearances=pdfrw.PdfObject('true'))) pdfrw.PdfWriter().write(self.temp_directory + "data.pdf", template) print('Pdf saved') return self.temp_directory + "data.pdf" def convert_image_to_pdf(self, image_path, image_pdf_name): print('\nConverting image to pdf...') image = Image.open(image_path) image_rgb = image.convert('RGB') image_rgb.save(self.temp_directory + image_pdf_name) return self.temp_directory + image_pdf_name def add_image_to_pdf(self, pdf_path, images, positions): print('\nAdding images to Pdf...') file_handle = fitz.open(pdf_path) for position in positions: page = file_handle[int(position['page']) - 1] if not position['image'] in images: 
continue image = images[position['image']] page.insertImage( fitz.Rect(position['x0'], position['y0'], position['x1'], position['y1']), filename=image ) file_handle.save(self.temp_directory + "data_image.pdf") print('images added') return self.temp_directory + "data_image.pdf" def delete_temp_files(self, pdf_list): print('\nDeleting Temporary Files...') for path in pdf_list: try: os.remove(path) except: pass def compress_pdf(self, input_file_path, power=3): """Function to compress PDF via Ghostscript command line interface""" quality = { 0: '/default', 1: '/prepress', 2: '/printer', 3: '/ebook', 4: '/screen' } output_file_path = self.temp_directory + 'compressed.pdf' if not os.path.isfile(input_file_path): print("\nError: invalid path for input PDF file") sys.exit(1) if input_file_path.split('.')[-1].lower() != 'pdf': print("\nError: input file is not a PDF") sys.exit(1) print("\nCompressing PDF...") initial_size = os.path.getsize(input_file_path) subprocess.call(['gs', '-sDEVICE=pdfwrite', '-dCompatibilityLevel=1.4', '-dPDFSETTINGS={}'.format(quality[power]), '-dNOPAUSE', '-dQUIET', '-dBATCH', '-sOutputFile={}'.format(output_file_path), input_file_path] ) final_size = os.path.getsize(output_file_path) ratio = 1 - (final_size / initial_size) print("\nCompression by {0:.0%}.".format(ratio)) print("Final file size is {0:.1f}MB".format(final_size / 1000000)) return output_file_path 
submitted by Jpwolfe99 to learnpython [link] [comments]


2024.05.14 18:12 cryptokaykay What are your current challenges with evaluations?

What challenges are you facing and what tools are you using? I am thinking about building out a developer-friendly, open-source evaluations toolkit. I'm thinking of starting with a simple interface where you pass the context, input, output, and expected output and run it through some basic tests (both LLM-based and non-LLM-based), and also allow the ability to write custom assertions.
But I am wondering if you all have any insights into what other capabilities might be useful.
submitted by cryptokaykay to LangChain [link] [comments]


2024.05.14 17:56 kfspai Spice v0.12.2-alpha (May 13, 2024) is now available!

The v0.12.2-alpha release introduces data streaming and key-pair authentication for the Snowflake data connector, enables general append mode data refreshes for time-series data, improves connectivity error messages, adds nested folders support for the S3 data connector, and exposes nodeSelector and affinity keys in the Helm chart for better Kubernetes management.

Highlights

Breaking Changes

Contributors

What's Changed

Full Changelog: https://github.com/spiceai/spiceai/compare/v0.12.1-alpha...v0.12.2-alpha
submitted by kfspai to spiceai [link] [comments]


2024.05.14 17:56 FAGADEEE Need help with charger

The default charger I received with my laptop had a wattage of 135W, 19V, and a 1.9A input. I bought a replacement with an input current of 2.5A, with both chargers having the same output specifications. Will this do any harm to my laptop? So far it's only having heat issues. My model is an Acer Nitro 5 AN515-54.
submitted by FAGADEEE to AcerNitro [link] [comments]


2024.05.14 17:46 lazyhorsee Found a good ui library that goes well with htmx (mdui)

By coincidence I found a clone of the Material Design 3 UI library that is HTML-based, with some JavaScript, and that I could integrate nicely with my htmx frontend.
You could use the CDN and be done with it, but it's also possible to use their npm package, pick only whatever you need in your application, and then use a bundler like esbuild to package the components you used into one file, like this:
pnpm exec esbuild ./app/static_dev/input.js --bundle --outfile=./app/static/js/output.js
Of course you should first initialize a npm project, then install mdui & esbuild.
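In other words, something like this (package names as published on npm):

    pnpm init
    pnpm add mdui esbuild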
The docs are nice; I don't see anyone mention this library on this subreddit, but it's a good find imo.
Hopefully this helps somebody who wants a Material Design UI but can't find one that integrates with raw HTML.
submitted by lazyhorsee to htmx [link] [comments]


2024.05.14 17:42 AnkiHubOfficial 👑 AnKing Step Deck Update #5

👑 AnKing Step Deck Update #5
Check out the update here: https://community.ankihub.net/t/anking-step-deck-update-5/222542
Make sure to participate in the poll as well: https://community.ankihub.net/t/anking-step-deck-update-5/222542#poll-of-the-month-16

👑 AnKing Step Deck Update #5 (April 13th - May 14th)

Hi everyone! 👋
Hope you are all having an amazing month!
Let’s catch you up on what’s been going on every time you click the sync button

🎉 27,535 note updates!

🫶 3,527 new subscribers!

✅ Deck Updates

❓Question Banks

★ NBME: New tags added for OBGYN CMS form 5 (thanks to @taylordugan). Find it under this tag: #AK_Step2_v12::#Resources_by_rotation::ObGyn::nbme::form_5
★ AMBOSS: New step 2 self-assessment tag added! (thanks to @taylordugan)
★ UWorld Self Assessment: Step 1 UWSA #3 has been tagged! (thanks to @herstein.jacob)
★ Step 3 UWorld Tags: New Step 3 UWorld tags added for various QIDs (thanks to @dollajas)!

🎇 Sketchy & Pixorize

★ SketchyPathology: New tags added for missing cards (thanks to @joshuamb)
★ SketchyPhysiology: Tons of new images + tags + hyperlinks added for various videos (thanks to @epcase)
★ Sketchy: 100s of pre-existing screenshots updated with higher-quality versions (thanks to @musamalik)
★ Pixorize: 100+ images and hyperlinks added, thanks to the official Pixorize team!

🎥 Video Resources

★ BNB Step 1: New tags added for missing cards in antihypertensive video (thanks to @lawsonspence)
★ BNB Step 2: New tags added for many gastroenterology videos (thanks to @a11exa)
★ Bootcamp: 100+ tags and hyperlinks added to various cards (thanks to the official Bootcamp team!)

😋 Other

★ PANCE: 1000+ new tags added! (thanks to @camicardona)
★ New Addon: A brand new AnKing table addon for formatting is out! Use this addon to format existing AnKing tables (thanks to @shmuelsash for creating the addon!)
★ GIFs: GIFs displaying clinical signs have also been added (Relative afferent pupillary defect, CN VI palsy, etc.)
The list above does not include the 1000s of spelling, grammar, formatting, image, GIF additions and changes the community (you all) have submitted!

📈 Project Progress

🎉 OnlineMedEd (OME) Project

21,000+ updated hyperlinks have been added. Tags will also roll out in the future!
🚨 Don’t miss out on this exclusive 25% discount on a multi-month membership to OME: ANKING25

🧠 Algorithm Card Project

A new algorithm card covering the workup for blunt abdominal trauma was pushed out (thanks to @Sameem!)
Also check out the accompanying management flow chart made by @beejumm!
https://preview.redd.it/fncfjdh7we0d1.jpg?width=2070&format=pjpg&auto=webp&s=94b0d678b8a3786cf1f11f801c984adb888a3a28
https://preview.redd.it/sg6425h7we0d1.jpg?width=2912&format=pjpg&auto=webp&s=5a59b1caec4dbcb55d348c3e05feef3573da100e

🎨 Illustration Projects

@beejumm and @ianthebfg created some gorgeous illustrations to aid in your learning! Check them out:
https://preview.redd.it/qtjgowdewe0d1.jpg?width=2489&format=pjpg&auto=webp&s=834d207d35db9a38fbc08d9de25383993fbe36e7
https://preview.redd.it/48q7asdewe0d1.jpg?width=2475&format=pjpg&auto=webp&s=f7d482c33c96bbec30df8d05a2797c69a0d2a341
https://preview.redd.it/dzkg5vdewe0d1.jpg?width=1782&format=pjpg&auto=webp&s=e541bdc4fce48f3d4ac7871d0778fcad9fdd28f1
https://preview.redd.it/p0si9rdewe0d1.jpg?width=10240&format=pjpg&auto=webp&s=734b593a22c527d1651fb8d571f6b48ba092dd2f

🫶🏼 Community Shoutouts

A few community members were outstanding with their suggestions this month and we want to highlight their dedication!
Top 5 community members with the most suggestions accepted in the last 30 days:
  1. @camicardona (4,603)
  2. @mohannadkh10 (1,192)
  3. @a11exa (434)
  4. @epcase (369)
  5. @taylordugan (290)
Thank you to everyone who submitted a suggestion this month!

👨‍🔧 New Maintainer

We’re happy to announce this month’s new maintainer! This user has dedicated a ton of time submitting helpful suggestions for content changes/tag additions and general deck improvements. Please give a warm welcome to:
  1. @DillingerMed 🎉

📣 We Need Your Input!

We are looking for current or soon-to-be medical students to conduct a 45 minute virtual interview for research purposes. If you are interested, please sign up here ($25 Amazon gift card for those who complete the interview):
❗️Sign up if interested: AnkiHub User Study
We are also looking for more information regarding what type of curriculum your school uses (systems-based vs. a traditional histology/anatomy approach for the M1 years vs. PBL). This quick survey will help us improve AnkiHub in the coming months. It’s a 2-3 minute survey!
❗️Survey link: https://forms.gle/gDM9Dq1TG8cjq2GG6

❓Poll of the Month

Recently, we have started adding video hyperlinks to the extra section of certain cards, typically under a minute long, illustrating various physical exam findings. Some of these include:
Example:
https://preview.redd.it/vh6s3henwe0d1.jpg?width=2862&format=pjpg&auto=webp&s=fdfa04d3888ca856edf98cd0d0f0dc99a853b11f
We want to know more from you below (poll is anonymous)!
Vote here: https://community.ankihub.net/t/anking-step-deck-update-5/222542#poll-of-the-month-16

👋 Wrapping up

We hope you all enjoyed this month’s update!
Take care everyone ❤️
Regards, The AnKing Step Deck Maintainers ❤️

🔗 Useful Links

Want to make a suggestion? Follow the guidelines → AnKing Step Deck Submission Guidelines
Want to volunteer to tag/add images for Sketchy/Pixorize/Boards & Beyond Step 2 or volunteer to make illustrations for the AnKing deck? Send an email to → anking.ahmedd@gmail.com
Get support from our team → https://community.ankihub.net
Frequently asked questions → FAQs - AnkiHub Community
Check out the AnKing Step Deck wiki → [Wiki] AnKing Overhaul for Step 1 & 2 by AnKingMed
Follow us on Instagram → The AnKing (@ankingmed) • Instagram photos and videos
submitted by AnkiHubOfficial to medicalschoolanki [link] [comments]


2024.05.14 17:35 Ooooooooosh Specific mini amp features

This is a new world to me so please excuse my ignorance. I've just binned an old stereo system and kept the speakers (passive), which I want to use to hopefully give my LG C2 TV a bit more boom. I've also got a smallish passive subwoofer. From what I can work out all I need is an amp, and going on the various reviews on YouTube I reckon one of the small Chinese mini amps will be good enough for me. I'm happy to spend about a hundred pounds to make it all work.
So what I think I need is an amp with an optical input and outputs for passive speakers and a passive subwoofer. This is the closest I can find; it's just got RCA inputs instead of a digital one.
https://imgur.com/86XgqZp
The questions I put to you are:
  1. Just to check: with an optical to RCA converter, would this work?
  2. I've googled and amazoned as much as my patience has allowed but can't find one with an optical in and passive speaker and sub outs... surely this must exist?!
  3. If it doesn't exist, then any suggestions on the best setup to get it all working? Better to go with the optical to RCA converter, or get a separate amp for the subwoofer, or something else entirely?
  4. Honestly - is it worth it? I've got these speakers kicking about and am just looking to do something with them.
Finally, some more details about what I have:
  - LG C2 48" TV: it's got an optical output or a 3.5mm output.
  - Passive speakers from a Kenwood RXD980MD: max input power 100W, impedance 6 ohm.
  - Passive subwoofer from a broken Orbitsound spatial sound bar, model M10LX: minimum 4 ohm, speaker wire connection.
And more about the amp I've seen on Amazon:
  - Power Adapter: 24V/4.5A
  - DC Input Range: 12-24V
  - THD: ≤ 0.04%; SNR: ≥ 98dB
  - Frequency Range: 20Hz - 20kHz (±1 dB)
  - Input Sensitivity: ≤ 280mV; Terminating Impedance: 2 - 8 Ohm
  - MAX Power Output: 50W x 2 + 100W
  - Input Mode: Bluetooth and RCA
  - Bluetooth Transmission Distance: Up to 50 Ft
  - Package Includes: BT3D Amp, Power Supply, User Manual
submitted by Ooooooooosh to hometheater [link] [comments]


2024.05.14 17:31 RoyalReverie New OpenAi e-mail

I have just received the following e-mail with a few more details from OpenAI:
Hi there,
We launched GPT-4o in the API—our new flagship model that’s as smart as GPT-4 Turbo and much more efficient. We’re passing on the benefits of the model’s efficiencies to developers, including:
GPT-4o in the API currently supports text and vision capabilities. It has better vision capabilities and improved support for non-English languages compared to GPT-4 Turbo. It has a 128k context window and has a knowledge cut-off date of October 2023. We plan to launch support for GPT-4o’s new audio and video capabilities in the API to a small group of trusted partners in the coming weeks.
We recommend that developers using GPT-4 or GPT-4 Turbo consider switching to GPT-4o. You can access GPT-4o in the Chat Completions API and Assistants API, or in the Batch API where you get a 50% discount on batch jobs completed asynchronously within 24 hours.
To get started, test the model in Playground, which now supports vision capabilities, and check out our API documentation. To learn how to use vision to input video content with GPT-4o today, check out the Introduction to GPT-4o cookbook. If you have questions, please reach out in the OpenAI developer forum.
—The OpenAI team
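For reference, switching appears to be just the model string in a Chat Completions call; a minimal sketch with the official Python SDK, assuming OPENAI_API_KEY is set in your environment:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o",  # previously e.g. "gpt-4-turbo"
        messages=[{"role": "user", "content": "Say hello in French."}],
    )
    print(response.choices[0].message.content)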
submitted by RoyalReverie to singularity [link] [comments]


2024.05.14 17:29 gob_magic Which smart glasses are easier to program for? MetaRayBan/TeamOSG/Brilliant Labs

I am building a solution to a problem I am facing, and after speaking to a few folks in sales/journalism I can see they may use it too. Funny to see this is already being worked on by Cayden (u/hackalackolot). Love their concept. Mine is much simpler, a specific use case without the eye overlay.
This fun demo is by Cayden: https://www.youtube.com/watch?v=3n6DzuYQ_v8&t=12s
However, for my use case I would love to know which hardware has an external audio input, audio output and vision input (speaker + mic + camera). No need for the AR eye display.
For prototyping I am planning on using one of these.
Meta Ray-Ban Glasses: Cannot find much information on creating your own software over the hardware. Excellent mic and camera hardware. Pre-order for USD 450.
TeamOpenSmartGlasses (Vuzix Z100): Love their concept! Very similar to what I am working on. I will message them and see if there's a way to collaborate. The device they use is the Vuzix Z100. USD 800
Brilliant Labs: Similar to the rest, has an AR display. USD 350
submitted by gob_magic to hardware [link] [comments]


2024.05.14 17:23 rapttured How do you export organic google search clicks by day?

This is the data I am looking for, but I can't figure out how to convert it to table format.
Under our "Google organic search traffic: Landing page + query string" report I see the chart pictured above. GA4 is obviously collecting data on organic search clicks by day, but when I create a CSV export, the data portrayed here isn't included.
For context, my boss wants to do paid search. She wants to know how many organic clicks we get per day on average. This chart is fine, but it doesn't indicate which day of the week is best for organic search clicks on average. I want to export said data (CSV) so I can create a heat map; that way my boss can easily see the best day of the week for organic clicks by the shading.
I've checked similar reports and tried to create my own reports in acquisitions and user behavior, but I can't find the data I need except in the above chart. If push comes to shove I can manually input each data point into Excel, but if someone knows how to export it instead, please share.
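One workaround worth considering: since that report is Search Console data surfaced in GA4, you can pull clicks by date straight from the Search Console API instead. A rough sketch with google-api-python-client (the token path and site URL are placeholders; you'd need OAuth credentials with the webmasters.readonly scope):

    from google.oauth2.credentials import Credentials
    from googleapiclient.discovery import build

    # "token.json" is a placeholder for OAuth credentials you've already obtained.
    creds = Credentials.from_authorized_user_file("token.json")
    service = build("searchconsole", "v1", credentials=creds)
    response = service.searchanalytics().query(
        siteUrl="https://www.example.com/",  # your verified property
        body={"startDate": "2024-04-01", "endDate": "2024-05-14", "dimensions": ["date"]},
    ).execute()
    for row in response.get("rows", []):
        print(row["keys"][0], row["clicks"])  # one line per day: date, clicks

From there, the day-of-week averaging and the heat map are straightforward in Excel or pandas.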
submitted by rapttured to GoogleAnalytics [link] [comments]


2024.05.14 17:15 WinbuzzerMaria How to Enable the Ultimate Performance Power Plan in Windows 11 and Windows 10

How to Enable the Ultimate Performance Power Plan in Windows 11 and Windows 10
https://preview.redd.it/9dnj4r61se0d1.png?width=768&format=png&auto=webp&s=3589f6a61841e58359af689712a2872c7713e08a
Optimizing system performance is critical in high-demand computing environments. For users engaged in tasks requiring maximum computational power, such as video editing, 3D rendering, or gaming, Windows 10 and Windows 11 offer the Ultimate Performance Power Plan. This feature maximizes system efficiency by reducing energy-saving constraints, making it ideal for workstations and high-performance PCs connected to a direct power source.

What is the Ultimate Performance Power Plan?

The Ultimate Performance Power Plan enhances system performance by fully utilizing hardware capabilities, albeit at the cost of increased power consumption and potential impacts on hardware longevity. This guide provides a concise overview of enabling this power plan on Windows systems, offering step-by-step instructions tailored for users aiming to leverage their PC's full potential.
  1. Processor Performance: The plan sets both the minimum and maximum processor state to 100%, ensuring that the CPU operates at its highest performance level at all times, regardless of the workload. This eliminates any potential throttling that could occur under lower power plans, providing consistent and maximum processing power for demanding applications.
  2. Hard Disk Settings: It prevents hard disks from being turned off to save power. This means that the hard disk remains in an active state, ready to quickly read and write data, which is beneficial for tasks that require frequent access to large files.
  3. System Cooling Policy: The plan alters the system cooling policy to ensure that the cooling is aggressive enough to handle the increased thermal output from the processor and other components running at full capacity. This can result in the cooling fans operating more frequently or at higher speeds.
  4. Sleep and Hibernate Settings: The Ultimate Performance Plan disables sleep and hibernation modes by default to ensure that the system remains active and ready for tasks at all times. This is particularly useful for systems that are used as servers or need to be available for remote access.
  5. USB Settings: It adjusts USB settings to prevent USB devices from being suspended, ensuring that connected devices, such as external hard drives and peripherals, are always ready for immediate use without any latency from power-saving modes.
  6. Graphics and Display Settings: The plan may also adjust settings related to graphics and display, such as disabling adaptive brightness and ensuring that video playback is optimized for quality, which can enhance the experience during multimedia consumption and production.
  7. Wireless Adapter Settings: For systems with wireless capabilities, the Ultimate Performance Plan sets the wireless adapter to maximum performance, reducing power-saving measures that could impact the stability and speed of wireless connections.
Note that the Ultimate Performance Power Plan is not universally beneficial for all computing tasks. While it provides significant advantages for high-intensity operations, it may not yield noticeable improvements for everyday applications and can lead to unnecessary power usage. Users should assess their specific needs and the nature of their computing tasks to determine the appropriateness of activating this power plan, ensuring an optimal balance between performance enhancement and resource utilization.
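On most builds, the plan can be surfaced from an elevated Command Prompt by duplicating the hidden scheme via its well-known GUID (worth double-checking on your system, since OEM images vary):

    powercfg -duplicatescheme e9a42b02-d5df-448d-aa00-03f14749eb61

The command prints the GUID of the new copy, which you can then select under Control Panel > Power Options or activate directly with powercfg /setactive followed by that printed GUID.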
submitted by WinbuzzerMaria to winbuzzer [link] [comments]


2024.05.14 17:14 Ok-Breakfast-990 What do you look for in a looper?

I’m in the midst of a development project for a new multitrack loop station. My buddy and I were starting a new project and found that the current looper options on the market didn’t really satisfy our needs, so we set out to make our own. All the ones we looked at were some combination of too expensive, too confusing, or without enough tracks/inputs.
Now it has come far enough that our little DIY project might become a full-blown product. It will feature 8 tracks, somewhere from 4 to 8 inputs and outputs, and individual controls for each track. Our focus is primarily on live performance, but the looper will also feature recording.
Since our music primarily features synths, and we are developing with that in mind, I thought I’d ask this sub what your experiences with loop stations are, what they were missing and what features you’d like to have?
Feel free to ask any questions about the project and I will do my best to answer them. I know this post is lacking in specific detail but that is because we are early enough in the development cycle to plan and implement large changes, I will be sure to share more information here in the future.
submitted by Ok-Breakfast-990 to synthesizers [link] [comments]


2024.05.14 17:11 Johnmayer000 Formula marking overdue tasks that are already done

So in a Notion table, I have a formula:
if(
  prop("Status") == "Done",
  "On time",
  if(
    prop("Deadline") < now(),
    "Overdue",
    "On time"
  )
)
The problem is that when I add old tasks that were already done on a date before today (since I'm only now setting up this Notion database), it marks them as overdue. I need a formula that only gives me the "Overdue" output when the task wasn't marked "Done" by the date in the "Deadline" column; otherwise I need to get the "On time" output.
Please help!
submitted by Johnmayer000 to Notion [link] [comments]


2024.05.14 17:00 FreegheistOfficial GPT-4o is an Encoder-Decoder from the original Attention paper. Change my mind..

As we know, LLMs represent only the decoder part of the encoder-decoder transformer model in the original 'Attention Is All You Need' paper.
Now we see a real-time version that can input/output audio, text, and images seamlessly, using a single model. This isn't possible in a pure-decoder LLM, but if we add the encoder back in, it probably is.
So just like an LLM, where you pretrain the general knowledge and then fine-tune for specific behaviors like a chatbot, this new model adds the encoder to integrate multiple modes, makes it real-time, and is trained with a ton of live content. Voila, you get a completion-based version of "Her" (it's predicting what a "Her" would likely say next using a dynamic context window and decoding that autoregressively, just fast enough to synthesize it as realistic audio based on its training).
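To make the wiring concrete, here is a toy PyTorch sketch (purely illustrative, not OpenAI's actual architecture): the non-text stream is encoded once, and the decoder cross-attends to that memory while generating.

    import torch
    import torch.nn as nn

    d = 512
    encoder = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=d, nhead=8, batch_first=True), num_layers=2)
    decoder = nn.TransformerDecoder(
        nn.TransformerDecoderLayer(d_model=d, nhead=8, batch_first=True), num_layers=2)

    audio_frames = torch.randn(1, 200, d)  # stand-in embeddings for an audio stream
    text_tokens = torch.randn(1, 16, d)    # stand-in embeddings for tokens generated so far
    memory = encoder(audio_frames)         # encode the input modality once
    out = decoder(text_tokens, memory)     # the decoder cross-attends to the encoded stream
    print(out.shape)                       # torch.Size([1, 16, 512])

In a decoder-only LLM there is no separate memory; everything has to be flattened into a single token stream.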
submitted by FreegheistOfficial to LocalLLaMA [link] [comments]


2024.05.14 16:58 PinguKuah Discord voice input not working

I used to be able to use my speakers for audio output and connect my headphones to my Mac mini in order to talk in voice apps like Discord. I never really had to work out anything; now suddenly Discord can't find my voice, even though I've tried connecting my headphones directly into my Scarlett 18i20.
So - is this due to a new update from Discord, or is there a specific way for me to go into the Focusrite app and change the settings?
To clarify, the goal would be:
Audio output - speakers that are connected to my Scarlett 18i20;
Voice input - headphones that are connected to my Mac mini's audio port;
The Scarlett 18i20 is connected to the Mac mini.
submitted by PinguKuah to Focusrite [link] [comments]

