Diagram of metaphase 1 in mitosis

origami

2008.06.15 12:20 origami

Welcome to the new /origami. Do what you want.
[link]


2019.09.19 23:42 StoneColdCrazzzy TransitDiagrams

A community for all kinds of Transit Diagrams and Maps - a place to exchange and help with self-made Transit Maps and Diagrams.
[link]


2010.07.16 20:52 mlambir Luthier

[link]


2024.05.21 16:48 UMJaved What is the best way to create custom visualizations in Power BI?

Creating custom visualizations in Power BI involves several steps and methods, depending on the level of customization and the type of visual you want to create.
Here are some of the best approaches:

1. Using Custom Visuals from AppSource

Power BI AppSource is a marketplace where you can find a wide variety of pre-built custom visuals created by Microsoft and third-party developers, such as ChartExpo's "Sankey Diagram for Power BI", "Comparison Bar Chart for Power BI", "Likert Scale Chart for Power BI", and "Multi-Axis Line Chart for Power BI". To use these visuals:
  1. Go to the Visualizations pane in Power BI Desktop.
  2. Click on the three dots (…) and select “Get more visuals.”
  3. Browse or search for the visual you need.
  4. Add the visual to your report and configure it as required.

2. Creating Custom Visuals with R or Python

Power BI supports custom visuals created using R and Python, which are powerful for advanced analytics and custom visualizations.
  1. Enable R or Python scripting:
    • Go to File > Options and settings > Options > R scripting or Python scripting.
    • Set up the R or Python environment if you haven’t already.
  2. Add a visual:
    • Click on R or Python visual from the Visualizations pane.
    • Write your R or Python script in the script editor. The script should generate a plot using libraries such as ggplot2 for R or matplotlib for Python.
  3. Load data into the visual using the data fields, and Power BI will execute the script to render the visual.
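As a concrete illustration of step 2, here is a minimal Python visual script of the kind described above. This is a hedged sketch: inside Power BI, the `dataset` DataFrame is injected automatically from the fields you add to the visual; the stand-in data below is invented so the script also runs outside Power BI.

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # non-interactive backend; Power BI captures the rendered figure
import matplotlib.pyplot as plt

try:
    dataset  # provided by Power BI at render time from the visual's data fields
except NameError:
    # Stand-in data (ours, for running the script outside Power BI)
    dataset = pd.DataFrame({"Month": ["Jan", "Feb", "Mar"],
                            "Sales": [120, 150, 90]})

# Build the plot; Power BI renders whatever the script draws
fig, ax = plt.subplots(figsize=(6, 4))
ax.bar(dataset["Month"], dataset["Sales"], color="#4C72B0")
ax.set_title("Sales by Month")
ax.set_ylabel("Sales")
plt.show()
```

The same pattern applies to an R visual, with `dataset` as an R data frame and `ggplot2` in place of matplotlib.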

3. Building Custom Visuals with Power BI Developer Tools

For highly customized visuals, you can use the Power BI Developer Tools to create visuals using JavaScript and TypeScript.
  1. Set up the development environment:
    • Install Node.js and npm.
    • Install the Power BI Visuals Tools: npm install -g powerbi-visuals-tools.
  2. Create a new visual project:
    • Use the command: pbiviz new to create a new custom visual project.
    • Navigate to the project directory: cd <project-name>.
  3. Develop the visual:
    • Modify the source code in the /src directory. You can use D3.js or any other JavaScript library to create your visual.
  4. Test the visual:
    • Use pbiviz start to run a local server that allows you to test your visual in Power BI.
  5. Package and deploy the visual:
    • Use pbiviz package to create a .pbiviz file.
    • Import this file into Power BI by selecting Import from file under the Visualizations pane.

4. Using Power BI Themes and Custom Formatting

For custom styling and formatting, you can create a Power BI theme file (JSON) to define colors, fonts, and other visual styles.
  1. Create a JSON theme file with custom color palettes and formatting options.
  2. Import the theme into Power BI:
    • Go to Home > Switch Theme > Import Theme.
    • Select your JSON file to apply the custom theme to your report.
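A minimal theme file might look like the following. The color values here are arbitrary examples; `name`, `dataColors`, `background`, `foreground`, and `tableAccent` are standard Power BI theme keys.

```json
{
  "name": "Example Corporate Theme",
  "dataColors": ["#31B6FD", "#4584D3", "#5BD078", "#A5D028", "#F5C040", "#05E0DB"],
  "background": "#FFFFFF",
  "foreground": "#252423",
  "tableAccent": "#31B6FD"
}
```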

Best Practices for Creating Custom Visuals

submitted by UMJaved to bestchartsandgraphs [link] [comments]


2024.05.21 11:26 Mololama Smoke bomb prep and use for chases with additional properties

Potassium nitrate mix is one of the easiest ways, to make the chased influenced by a chemical that can be gaseous, in closed spaces.
Ever had a situation when you knew the chased was in the room, but you did not know where exactly? You know they are hiding in one of the corners, but which? If you search the wrong one, they will escape the room. You could wait outside, but you might not have the privilege to wait. So your best way is to make it harder for them to stay hidden or even just run. Druging them with gas is ideal.
Potassium nitrate is ideal for smoke bombs. An recepie is concluded down below in the post for a normal smoke bomb. Smoke bombs with potassium nitrate can be mixed with some compounds that are NOT from 16 or 1 group ( you will make an explosive possibly). Other than that , you might want to avoid group 2 as well. Other stuff should be safer. Use organic compounds and block d oxides for low hazard poisonous gasses. They should be alright. Examples of such stuff would benzene (aromatic groups are more hazardous for lungs than normal carbon chains) or CuO. Remember that elements with higher atomic number than Pb might stop cell mitosis and be harder to treat, so preferably don't use during practice anything higher than Pb. Such example is U which would be used in form of UO2. Smoke bombs should cover a good area, be visible ( so you can avoid the cloud) and stay in the air for long but short enough periods to clean up the room. later you can look for the chased if they don't go out or run. Don't lick the walls after such a smoke bomb. Remember to wait a minute or two after some clears., if you want to enter contaminated area. Wear safety googles and heat proof gloves when you make the product for safety.
Use responsibly.
So... If you want to be a Scrooge, you can make a fuse out of animal hair, wool or protein fibres. Do not use plant based stuff. It won't burn. You can also use bird guano to get potassium nitrate. But it takes some time, as it needs to be filtered out in hot water until you get crystals.
Recepie
Open your soup can, empty, and clean it out. Do not disregard the top. Mix together 3 parts potassium nitrate, and 2 parts granulated sugar. Bring your frying pan to a low heat and add your mixture from Step 2. Continue mixing with a plastic whisk until the powdered mixture fully liquefies. While waiting for the mixture to liquefy grab your soup can top, drill five circles an 1/8th of an inch in from the edge and one in the center. Pour the mixture into the soup can place the top of the soup can on top of the can. Insert your fuse through the center hole until it reaches the bottom, trim excess according to desired time to ignite. With the lid off allow 6-8 hours to fully harden. After the mixture has hardened put the can top back on threading the fuse through the center, and seal to the can with a 1/16th in bead of silicone.
submitted by Mololama to ChasingHumansTips [link] [comments]


2024.05.21 11:16 GM-official-tech Understanding Flowcharts in Computer Science: A Comprehensive Guide

In computer science, a flowchart is a diagram that represents a process, algorithm, or system. It uses a set of standardized symbols to illustrate the sequence of steps involved in the process, making it easier to understand and communicate complex procedures. Flowcharts are commonly used in software development, systems engineering, and business process modeling.

Key Symbols in Flowcharts

  1. Oval (Start/End):
    • Indicates the beginning and end points of a process.
    • Example: Start, End.
  2. Rectangle (Process):
    • Represents a step in the process, usually an action or operation.
    • Example: Calculate total, Update record.
  3. Parallelogram (Input/Output):
    • Denotes input to the system or output from the system.
    • Example: Enter data, Display results.
  4. Diamond (Decision):
    • Shows a decision point in the process, with branches for different outcomes.
    • Example: Is the number > 10? Yes/No.
  5. Arrow (Flow Line):
    • Indicates the direction of flow from one step to the next.
    • Example: Moves from "Start" to "Input Data".
  6. Circle (Connector):
    • Used to connect parts of the flowchart that are not directly connected.
    • Example: Connects to another part of the diagram labeled "A".

Example of a Simple Flowchart

Let's create a simple flowchart for a process that checks if a number is even or odd:
  1. Start
  2. Input Number (Parallelogram)
  3. Is Number % 2 == 0? (Diamond)
    • Yes: Output "Even" (Parallelogram)
    • No: Output "Odd" (Parallelogram)
  4. End
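The same logic, written as straight-line code (the function name is ours, chosen for illustration): the `if` statement is the diamond, the `return` values are the two output parallelograms.

```python
def check_number(number: int) -> str:
    # Diamond: Is Number % 2 == 0?
    if number % 2 == 0:
        return "Even"   # Yes branch -> Output "Even"
    return "Odd"        # No branch -> Output "Odd"

print(check_number(10))  # Even
print(check_number(7))   # Odd
```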

Steps to Create a Flowchart

  1. Identify the Process: Define the process you want to represent.
  2. Determine the Steps: Break down the process into individual steps.
  3. Select the Symbols: Choose appropriate symbols for each step.
  4. Arrange the Steps: Organize the steps in logical order.
  5. Draw the Flow Lines: Connect the symbols with arrows to show the flow.
  6. Review and Refine: Check for accuracy and completeness.

Applications of Flowcharts

Advantages of Flowcharts

Flowcharts are a versatile tool in computer science and beyond, aiding in the visualization and communication of processes and systems.
submitted by GM-official-tech to u/GM-official-tech [link] [comments]


2024.05.21 08:20 Freddy_lang Implementing VLSI Circuits on FPGAs: A Step-by-Step Guide

Implementing VLSI circuits on FPGAs involves a series of structured steps. This guide provides a detailed overview of the entire process, from initial design specification to final hardware testing and optimization.

Step 1: Design Specification

Objective: Define the functional and performance requirements of the VLSI circuit.

Step 2: Hardware Description Language (HDL) Coding

Objective: Develop the circuit design using a Hardware Description Language (HDL), such as Verilog or VHDL.

Step 3: Simulation and Verification

Objective: Ensure the design works as intended before implementation.

Step 4: Synthesis

Objective: Convert the HDL code into a gate-level netlist compatible with the FPGA.

Step 5: Implementation

Objective: Map the netlist onto the FPGA's resources.

Step 6: Bitstream Generation

Objective: Generate the configuration file that programs the FPGA.

Step 7: FPGA Programming and Testing

Objective: Load the design onto the FPGA and verify its operation.

Step 8: Optimization and Refinement

Objective: Enhance the design for better performance, lower power consumption, or other improvements.

Tools and Software

Conclusion

Implementing a VLSI circuit on an FPGA involves translating high-level design specifications into hardware using HDL, verifying functionality through simulation, synthesizing the design into a gate-level netlist, mapping the design onto the FPGA's architecture, generating the bitstream, and finally programming and testing the FPGA. This structured process leverages various software tools to ensure the design meets performance, power, and area constraints.
submitted by Freddy_lang to vlsi_enthusiast [link] [comments]


2024.05.21 06:58 swagboi420blazeit Failed hotwire attempt on my STR...Need repair advice

As the title says, someone gave up midway through hotwiring my bike (2015 Street Triple R) before running off, and they've left me with a mess of cut wiring to deal with. Offhand I thought this could be an easy soldering job but I feel this is near impossible because of:
  1. how far back the sheaths have been stripped off some of these wires
  2. If I want to solder things back together, I need to cut off more of the black rubber sheathing at the top (attached to the ignition cylinder), and there is not much wire left up there to work with
  3. Extremely cramped quarters... there's no extra "play" on either side of the cut wires
This has made me consider whether it's possible to replace the entire cable altogether, and possibly just plug it into the same ignition cylinder (so I can retain my existing set of keys), but I can't figure out which specific part it would be from Triumph's crazy wiring diagram, and also from this video it seems like it's not possible.
To complicate matters further... the bike is parked 3 levels underground in a shared apartment garage. To give up and wheel it out to the street in order to get it towed to a shop will require rounding up 2-3 friends to help me push.
Overall the situation really sucks and has been pretty depressing. I've avoided fixing this for half a year because I've been dreading it. You guys are my last hope
https://preview.redd.it/izq4osimop1d1.jpg?width=1120&format=pjpg&auto=webp&s=3dc3a6a4c118477797f378d23096ee672708b8de
submitted by swagboi420blazeit to Triumph [link] [comments]


2024.05.21 04:37 whoami4546 How far are we from these two scenarios being "fixed"?

I love image generation quite a bit. There are two scenarios where image generation falls flat.
Example 1
  1. Generating images with specific text or diagrams. For example: multiplication tables on a chalkboard in a ruined classroom. In most cases the following is generated.
Example 2
  1. Generating images with specific criteria. For example: "draw a picture of a desk with 3 wood pencils, 2 marbles, 1 Game Boy, and 4 vases of flowers."
submitted by whoami4546 to ChatGPT [link] [comments]


2024.05.21 04:15 jlo7693 FIX TRUCKS FASTER – Mitchell 1 TruckSeries

🚛 Speed repairs for Class 4-8 trucks with the industry's BEST commercial truck repair information. GET STARTED NOW with a 14 day free trial of Mitchell 1 TruckSeries. No obligation. No credit card. No risk. It's 100% FREE!
🔗 https://www.m1repair.com/mitchell1truckseries
TruckSeries provides a one-stop information resource to help service professionals diagnose and repair all makes of medium and heavy trucks (class 4-8) with greater accuracy and efficiency. Eliminate the guesswork with accurate labor times and tools like interactive wiring diagrams, ADAS quick links, digital pictures, and so much more, delivered in seconds. Instantly pinpoint data like trouble code diagnostics, procedure information, and even TMC Recommended Practices. Don't get stuck in the slow lane – get trucks back on the road faster with TruckSeries!
submitted by jlo7693 to prodemand [link] [comments]


2024.05.21 03:09 slightlybemusedsloth Peru with Belmond Review

Went to Peru this April for a bucket list trip and as it is also on a lot of other people’s lists, I thought I’d share our experiences doing a full Belmond tour (hotels, private guides, museum/site/MP tickets, transportation). When researching for the trip, I had seen plenty of reviews on the individual properties but not much on their “journeys” so hopefully this is exhaustive but not too exhausting. Usually I plan my own trips and like to do a mix of properties rather than stay with one brand, but since we wanted to stay at the various Belmond offerings, it made sense to us to just do the package.
4 travelers (2 couples, all in our thirties)
Time frame: Eight days in April including international travel, booked in February (so short notice)
Day 1: Arrived in Lima late at night. Word of caution at the baggage claim - we knew we were meeting our Belmond rep and had been sent a diagram of where to meet him, which was a good thing, as there are people trawling the baggage claim that will say they are from the various hotels and try to take your luggage out for you (for a tip). They’re not officially associated with the hotel, so use their service at your own risk! Once we met our rep, we were promptly whisked away in a comfortable sprinter van complete with water and snacks, as would be the case for the rest of the trip, and our guide gave a good overview of the city on the way to the hotel, the Belmond Miraflores.
The hotel sits right on the water and is what I would call a classic “city hotel.” Beautiful flower arrangements in the lobby, where we were sat with welcome pisco sours for check in. Stayed in an Ocean View Junior Suite which was comfortable but nothing crazy memorable.
Day 2: Breakfast at the hotel rooftop restaurant. The small pool area is there as well. Great views over the coast. Food was a mix of a short a la carte menu and plenty of buffet options. Service was efficient and very friendly. Post breakfast, we were met in the lobby by our tour guide and driver for the day. Saw multiple sites including the Plaza Mayor, Archbishop’s Palace, the Santo Domingo Convent, and pre-Incan ruins. The best part was definitely the Larco museum. It’s excellently curated, the outdoor space is beautiful, and the exhibits are fascinating (and unique - erotic ceramics???). Appreciated having a guide to take us through the highlights, as sometimes it’s easy to get “museum-ed out” but I could have easily spent more time there. Hopped back to the hotel for a light late lunch. The restaurant downstairs has excellent ceviche. Spent a few hours relaxing and enjoying the view before Maido for dinner. The food is great, the wine pairing and intro of said wines was a bit perfunctory.
Day 3: Breakfast was again delicious and the waiters packed us to-go parcels of coca/mint tea leaves for our trip to Cusco. Belmond took care of booking our flight on Latam and we were walked through right up to security. Once we landed and before we really felt the altitude, we were met by our driver and guide for the next few days and whisked away towards the Sacred Valley. Again, plenty of water and snacks on board, wifi, and coca candy for the altitude. Made a stop at Sulca Textiles, which is a small community collective of weavers with a museum of stunning wall weavings, a store with real baby alpaca items (not "maybe alpaca"), and a chance to see and feed the alpacas, llamas, and guanacos! Very memorable for sure and the best spot to load up on gifts. Stopped for a few more photo ops on the way to the Rio Sagrado. The Sacred Valley is filled with expansive, ever-changing views and Hugo entertained and educated us on the long history and culture of the area.
The Rio Sagrado is a small, quiet sanctuary whose entrance, right off the main road, is almost blink-and-you-miss-it. Again we were greeted with a welcome drink and cool towels. The hotel is not big but there are some terraces and they will happily golf-cart you (or in our case, our luggage) around if you need. Stayed in a Garden Junior Suite. The room had a small balcony area with a yoga mat available and while there was no tub, there was a large walk-in shower. There is a small bar and quiet restaurant on site. Emphasis on quiet - it was the smallest of all the hotels on the trip, but the food quality was certainly up to par. They warm the beds at turndown with llama water bottles, a very cute touch.
Day 4: Breakfast here seems to alternate between a la carte plus buffet vs strictly a la carte. Hugo met us at our pre-discussed time and off we went to visit Ollantaytambo. There's a colorful market there that is nice for photos and if you want classic souvenir trinkets, but the site itself is the star. The streets there are narrow and crowded and our driver navigated them with ease. Hugo hiked with us to the very top and impressed us with his knowledge and insight. We're also not stuffy people and he easily navigated both our interest in the culture and also our often bad jokes (with worse ones of his own 😂). For lunch we were treated to a local restaurant up in the mountains where we were the only ones there! I don't think we would have otherwise found the place but it was a veritable feast that we got to enjoy with our now friends. Post lunch, more impressive tours of Maras and Moray. If you don't get to go to Central in Lima, Virgilio's other restaurant Mil is right next to Moray. Back to hotel for relaxing at the bar with drinks and cards and then early dinner…MP was waiting!
Day 5: Did I mention you get to feed the baby alpacas at breakfast? After this must-do, we were off back to Ollantaytambo to the train station. If you're not like us and book reasonably ahead of time, the Belmond Hiram Bingham stops right at the Rio Sagrado and picks you up from there. We took the Vistadome. As you would guess from the name, there's plenty of windows that stretch above you to take in the Andean views. There's an open observatory car at the end as well complete with live entertainment. The trip goes by quickly and Hugo came with us on the train. There are luggage restrictions so we left our big bags with our driver, who would bring them to Cusco for us. At the station in Aguas Calientes, the Sanctuary Lodge has people to take your bags ahead of you, and then you take the bus up to MP proper. Here Hugo worked his magic (he seemed to know people everywhere) and managed to get us on the bus before a huge wedding party. Yes it's a public bus, but it's perfectly comfortable and air conditioned. Arrived at the entrance to MP and wow, the Sanctuary Lodge really is RIGHT THERE. They take you to the garden to check in (welcome drinks, towels, the whole enchilada), and you marvel at where you are. The gardens are beautiful and absolutely filled with hummingbirds! Rooms weren't quite ready so we had the buffet lunch at the hotel. Plenty of choices here. They came and found us at lunch to tell us our rooms were ready. Stayed in a Deluxe Terrace Room. The rooms are…not large and had a tiny bit of a damp smell (this is such a minor thing) but were well stocked (raincoats, souvenir water bottles, bug spray, lotions, massage oils, plenty of snacks and drinks - meals and minibar snacks included here).
Once we had time to freshen up, it was time to see Machu Picchu! Photos don’t do it justice and you will want a guide to get the most out of your experience. Hugo made the site come to life and this time of year, it did not feel crowded at all going later in the day. It also started drizzling when we were leaving, and it was perfect getting to duck right into the hotel, steps away. There’s nothing besides the hotel there so relax at the restaurant bar, have a spa visit, and get ready for dinner. It seemed most everyone there had changed out of hiking gear. Personally, dinner was well executed if the flavors were not my favorite. Take it with a grain of salt as they obviously have to bring everything up from the town.
Day 6: Woke up early to hike Huayna Picchu. The best views of MP were at this time. Hugo hiked “the stairs of death” with us (not nearly as bad as it sounds if you don’t have an extreme fear of heights) and played personal photographer. It’s a very worthwhile hike to get to see MP from a different angle. We got back right at check out time and the hotel was kind enough to let us change/shower in our own room rather than have to use their separate change/shower area. We did another circuit of MP after lunch and then just hung out with Hugo over drinks. The biggest perk of staying at Sanctuary Lodge is having multiple chances to see MP. While it’s beautiful on a gloomy day with the clouds suspended amongst the mountain peaks, it would be sad to travel all that way and never see it while it’s sunny. And weather changes quickly in the mountains!
Had a long bit of travel back through the Sacred Valley by train (if you were only to do the Hiram Bingham one way, it may be better to do it on the way back as it’s nighttime and you can’t enjoy the views), then picked up by car and off all the way back to Cusco.
Stayed at the Palacio Nazarenas in a Studio Suite and it was the best of all the Belmond properties! Right next door to the Belmond Monasterio. It has beautiful courtyard spaces everywhere you look and the rooms are the largest here. They pump oxygen in to help with the altitude. Large bathroom with soaking tub and separate spacious walk in shower. Studio suites have a sitting area inside and a small patio area outside overlooking a courtyard. Large bottles of rum and pisco are included. Got in super late so ordered room service which was delicious.
Day 7: Breakfast was combo buffet and a la carte. Fresh juices and plenty of local produce. The restaurant Mauka overlooks the pristine royal blue pool and it's a picture-perfect setting. Lots of touring around Cusco this day, seeing the main square and cathedral, multiple important sites like Sacsayhuaman, and Quenqo. Hugo really shined here - besides helping us understand the significance of the sites, he knew we were sad about not seeing a vicuña so we did an impromptu stop at another weaving center to see two of the few non-wild vicuñas. He also had arranged "a farewell surprise" for us and one of our party hadn't been feeling well that day. Hugo checked on him all during our tours and arranged for our driver to pick him up so we could all share one last farewell drink. The Palacio is a gem and I would happily spend many more days here! When we got back to the hotel post shopping and tearful farewells (we actually still keep in touch), we had a personal patio-side pisco sour making class with one of the fantastic butlers and enjoyed one of my favorite meals of the trip at Mauka. Pricey, but very very good.
Day 8: Off to Lima again, where we had a long layover, the same Belmond rep who met us initially helped settle us in for the wait before the long trip home!
Belmond Bellini perks (through a TA, they don’t have a personal reward program): Usually $100 hotel credit everywhere we stayed, potential for room upgrades, breakfast every day, welcome note/chocolate. Also a $500 voucher to use for another Belmond trip
Will be looking to do a trip back to Peru at some point to see the Nazca Lines and Lake Titicaca and will not hesitate to use Belmond again, especially to get a few extra days of R&R at the Palacio.
TLDR: If you’re going to Peru for the first time and want to do it chubby luxe, the Belmonds certainly fit the bill and the package deal is worth it for the convenience and the quality of the guides. You won’t have to worry about a thing.
If you’ve read this far, hope this helps and happy travels!
submitted by slightlybemusedsloth to chubbytravel [link] [comments]


2024.05.21 02:50 mcthieferson Help wiring Venstar T7900

I currently have a Trane thermostat (manual) with the following wiring
existing wiring
I am wanting to install a Venstar T7900 (manual), which has the following terminals:
Venstar T7900 terminals
I've read through much of each manual, including the sample wiring diagrams. The T7900 has the following info on wiring:
wire connections
It also has some dip switches which have to be set for:
  1. gas,elec / heat pump => I've set it to heat pump
  2. O / B wire for controlling the switchover valve => I've set it to O
Most of the wiring seems straightforward enough, but the manual states this about the O / B dip switch:
When the GAS/EL or HP dip switch is configured for HP, this dip switch (O or B) must be set to control the appropriate reversing valve. If O is chosen, the W1/O/B terminal will energize in cooling. If B is chosen, the W1/O/B terminal will energize in heating.
Given that W1/O/B will be energized in cooling, I'm not sure about wiring both Orange and White into the W1/O/B terminal together. Below is my attempt at mapping the old terminals to the new terminals. Would appreciate some help. Thanks.
Trane terminal -> T7900 terminal
B              -> C (because Trane uses B for common)
O              -> W1/O/B
G              -> G
Y              -> Y1
W1             -> ?
W2/X2          -> ?
R              -> R
submitted by mcthieferson to thermostats [link] [comments]


2024.05.21 02:11 Hel-Low5302 Divider Generator gives unknown output

I'm a beginner in FPGA programming and I'm trying to implement a noise filter in Verilog in Vivado. I'm doing calculations on the input signal where division is needed, so I'm using the Divider Generator IP to implement the division outside of a module. When I run a behavioral simulation, I get an unknown output x from the Divider Generator, even though the input data is valid. I'm guessing that I'm implementing the division wrong with the IP, but does anyone have any idea what I might be doing wrong? I hardly wrote any code for the Divider Generator itself; mostly I just connected its input and output ports.
Here is my code (here is also the full version of the code):
block diagram of the main module and the Divider Generator
reg [15:0] y, y_init, u;
reg [15:0] x_curr, x_curr0, x_curr1, x_curr2, x_next, x_next2, x_next3;
reg [15:0] e_pre_var, e_next_var, e_next_var1, e_next_var2;
reg [15:0] K_pre, K_next, one_K_next_sq, K_next_sq; // kalman gain

always @(posedge clk) begin
    // distribute calculations over multiple clock cycles to meet timing
    // all variables that are not declared as reg are parameters
    if (counter == 9) begin
        y_init = y_measured;
    end
    if (counter == 10) begin
        x_next2 = k_omega*y_init;
        x_next3 = oT*x_next2;
        measured = x_next3;
    end
    if (counter == 11) begin
        x_next = x_next1 - x_next3; // predict the next state x_n
        e_pre_var = e_pre_var0;
    end
    // counter1024 counts up to 1024 clock cycles, so the following makes
    // calculations every 1024 cycles; this reduces the calculation rate from 125 MHz
    if (counter_eq_1024) begin
        y = y_measured;
        u = -oT*k_omega*y;
    end
    if (counter1024 == 1) begin
        K_next_num = (phi_sq*e_pre_var + d_var);
        K_next_denom = (phi_sq*e_pre_var + d_var + s_var);
        // K_next = K_next_num / K_next_denom;
        M_AXIS_OUT_tvalid_kal = 1'b1; // send the numerator and denominator values to the Divider Generator
        K_next = K_next_in;
    end
    if (counter1024 == 2) begin
        x_curr0 = 1-K_next;
        x_curr1 = x_curr0*x_next;
        x_curr2 = K_next*y;
    end
    if (counter1024 == 3) begin
        x_curr = x_curr1 + x_curr2; // estimate the current state
        x_next = phi*x_curr + u;    // predict the next state
    end
    if (counter1024 == 4) begin
        one_K_next_sq = (1-K_next)*(1-K_next);
        K_next_sq = K_next*K_next;
    end
    if (counter1024 == 5) begin
        e_next_var1 = one_K_next_sq*K_next_num;
        e_next_var2 = K_next_sq*s_var;
        e_next_var = e_next_var1 + e_next_var2;
    end
    if (counter1024 == 6) begin
        // make the next values the old values for next cycle
        K_pre = K_next;
        e_pre_var = e_next_var;
        // Assign the calculated value to the output signal
        M_AXIS_OUT_tdata = x_next;
        // Store sampled data in memory
        x_data = x_next;
        M_AXIS_OUT_tvalid_kal = 1'b0; // stop data flow to Divider Generator
    end
end
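For anyone trying to follow the math: the scalar Kalman recursion that the always block spreads across counter1024 cycles can be modeled in floating point like this. Variable names mirror the Verilog; the parameter defaults are made-up illustrations (not values from the original design), and this is only a reference model for sanity-checking simulation outputs, not a drop-in replacement for the fixed-point hardware.

```python
def kalman_step(x_pred, e_pre_var, y, phi=0.9, oT=0.01, k_omega=2.0,
                d_var=0.1, s_var=0.5):
    # Gain: this ratio is the division the Divider Generator IP performs
    K_num = phi * phi * e_pre_var + d_var
    K_den = phi * phi * e_pre_var + d_var + s_var
    K = K_num / K_den
    # Estimate the current state, then predict the next one (counter1024 == 2, 3)
    x_curr = (1 - K) * x_pred + K * y
    u = -oT * k_omega * y
    x_next = phi * x_curr + u
    # Propagate the error variance (counter1024 == 4, 5)
    e_next_var = (1 - K) ** 2 * K_num + K ** 2 * s_var
    return x_next, e_next_var

# Run a few steps on made-up measurements; the variance should shrink
x, P = 0.0, 1.0
for y in [1.0, 1.1, 0.9, 1.05]:
    x, P = kalman_step(x, P, y)
```

One practical note: with an AXI4-Stream divider, the quotient is only meaningful on cycles where the output tvalid is asserted; sampling the result a fixed number of cycles after asserting the input tvalid (as the counter-based scheme above effectively does) will read x's until the core's latency has elapsed.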
submitted by Hel-Low5302 to FPGA [link] [comments]


2024.05.21 02:05 MearihCoepa Help with setup

Since the cables and wires are a mess behind the TV I made this diagram.
The crux of the issue is that I'm not happy swapping the AV cables manually from the NES/SNES on the Retrotink each time I want a new system. The HDMI hub is digital push-button and is no problem, but the Retrotink 2X is eventually going to wear out, as will my cable management (yes, I'm one of those people), and there's a lack of space since the wife wanted a TV that's too big for the corner it's in. Is there an upscaler/digital TV converter box that can handle more than 1 system at a time like the old-school AV hubs for CRTs, or do I just need to buy a Retrotink for each system (which would need another HDMI cable, a USB hub for power, and a new HDMI switcher for the additional input)? The TV (Samsung BU8500 50) only has 2 HDMI ports and one is needed for the sound (I think I pushed the optical cable in wrong and busted the port, so sound needs to go in the HDMI/ARC port), and we move too much to lug a CRT around (plus I have one in storage for when we're finally home home).
How do you all handle multiple retro systems on a modern TV?
submitted by MearihCoepa to retrogaming [link] [comments]


2024.05.21 02:00 16-9 RATGDO Rolling Codes Synchronization

I need your help. I've perused extensively this Reddit as well as whatever I found on Github and on Paul's site. Yet, struggling.
I have 2 RATGDO v2.52i associated with 2 Liftmaster 8550 (Security + 2.0, battery backup). Both of them are configured as ESPHome devices and were adopted by the ESPHome dashboard. One of them figured out the rolling code counter pretty quickly, the other one is still going at it.
A few more things, you need to know.
  1. While my wifi is strong in the garage, I was never able to connect until I inserted the following lines in my configuration:

 on_boot:
   priority: 300
   then:
     - lambda: |-
         WiFi.setPhyMode(WIFI_PHY_MODE_11G);
  2. I set up a DHCP lease for the device, yet I eventually chose to manually assign the IP address in the config as well, to prevent occasional DHCP timeouts.
     manual_ip: # Set this to 34 for Door 2 or 33 for Door 1 static_ip: gateway: subnet: 10.0.0.3310.0.0.1255.255.255.0 
  2. I followed Paul's sequence with no success:
4) I followed an alternate sequence suggested here on Reddit with the same outcome
5) It goes without saying that I have reflashed countless times, waited overnight, tried to manually sync. At this point, I am at a loss.
I have a few questions some of you might have answers for.
  1. If the opener has a battery backup, is it sufficient to unplug and replug the opener? Shall I also remove the battery?
  2. How long does the opener learning sequence last? At which point should I unplug the opener again?
  3. My understanding is that learning a wall-mounted button (and the RATGDO) is different than learning a remote. The Learn button is for the remotes only and is not needed by the RATGDO. Am I correct?
  4. Should I ever use the Sync button found on the RATGDO webpage? When am I supposed to activate it? At the onset or whenever I lose a prior synchronization? The web UI documentation states: "Manually sync the ratgdo client with the GDO. If the GDO isn't responding to commands from ratgdo, a sync should force them to be on the same rolling code counter."
  5. At what point should I call the board defective?
Ready for all the suggestions and questions you might have.
submitted by 16-9 to homeassistant [link] [comments]


2024.05.20 23:18 beautifulboy11 Instant Comment Loading on Android & iOS

Instant Comment Loading on Android & iOS
Written by Ranit Saha (u/rThisIsTheWay) and Kelly Hutchison (u/MoarKelBell)
Reddit has always been the best place to foster deep conversations about any topic on the planet. In the second half of 2023, we embarked on a journey to enable our iOS and Android users to jump into conversations on Reddit more easily and more quickly! Our overall plan to achieve this goal included:
  1. Modernizing our Feeds UI and re-imagining the user’s experience of navigating to the comments of a post from the feeds
  2. Significantly improving the way we fetch comments such that, from a user’s perspective, conversation threads (comments) for any given post appear instantly, as soon as they tap on the post in the feed.
This blog post specifically delves into the second point above and the engineering journey to make comments load instantly.

Observability and defining success criteria

The first step was to monitor our existing server-side latency and client-side latency metrics and find opportunities to improve our overall understanding of latency from a UX perspective. The user’s journey to view comments needed to be tracked from the client code, given the iOS and Android clients perform a number of steps outside of just backend calls:
  1. UI transition and navigation to the comments page when a user taps on a post in their feed
  2. Trigger the backend request to fetch comments after landing on the comments page
  3. Receive and parse the response, ingest and keep track of pagination as well as other metadata, and finally render the comments in the UI.
We defined a timer that starts when a user taps on any post in their Reddit feed, and stops when the first comment is rendered on screen. We call this the “comments time to interact” (TTI) metric. With this new raw timing data, we ran a data analysis to compute the p90 (90th percentile) TTI for each user and then averaged these values to get a daily chart by platform. We ended up with our baseline as ~2.3s for iOS and ~2.6s for Android:
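The aggregation described above can be sketched in a few lines of Python; the function and data shapes here are illustrative assumptions, not Reddit's actual metrics pipeline:

```python
def daily_p90_tti(timings_by_user):
    """Illustrative sketch of the TTI metric above: compute each user's
    p90 TTI using the nearest-rank method, then average the per-user
    values to get the daily figure for a platform."""
    def p90(samples):
        ordered = sorted(samples)
        rank = -(-9 * len(ordered) // 10)  # ceil(0.9 * n), nearest rank
        return ordered[rank - 1]

    per_user = [p90(t) for t in timings_by_user.values() if t]
    return sum(per_user) / len(per_user)
```

Averaging per-user p90s (rather than taking one global p90) keeps a handful of very active users from dominating the daily number.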
https://preview.redd.it/f31z8rv5dn1d1.png?width=1026&format=png&auto=webp&s=cfeb6262741ad04c7aedfdd964dcb506d0abdcba
https://preview.redd.it/qmux6656dn1d1.png?width=1012&format=png&auto=webp&s=d3a8358caf461f84890fc176ed6e617d652fd28e

Comment tree construction 101

The API for requesting a comment tree allows clients to specify max count and max depth parameters. Max count limits the total number of comments in the tree, while max depth limits how deeply nested a child comment can be in order to be part of the returned tree. We limit the nesting build depth to 10 to limit the computational cost and make it easier to render from a mobile platform UX perspective. Nested children beyond 10 depth are displayed as a separate smaller tree when a user taps on the “More replies” button.
The raw comment tree data for a given ‘sort’ value (i.e., Best sort, New sort) has scores associated with each comment. We maintain a heap of comments by their scores and start building the comments ’tree’ by selecting the comment at the top (which has the highest score) and adding all of its children (if any) back into the heap, as candidates. We continue popping from the heap as long as the requested count threshold is not reached.
Pseudo Code Flow:
  • Fetch raw comment tree with scores
  • Select all parent (root) comments and push them into a heap (sorted by their score)
  • Loop the requested count of comments
    • Read from the heap and add comment to the final tree under their respective parent (if it's not a root)
    • If the comment fetched from the heap has children, add those children back into the heap.
    • If a comment fetched from the heap is at depth > requested_depth (or 10, whichever is greater), wrap it under the “More replies” cursor (for that parent).
  • Loop through remaining comments in the heap, if any
    • Read from the heap and group them by their parent comments and create respective “load more” cursors
    • Add these “load more” cursors to the final tree
  • Return the final tree
Example:
A post has 4 comments: ‘A’, ‘a’, ‘B’, ‘b’ (‘a’ is the child of ‘A’, ‘b’ of ‘B’). Their respective scores are: { A=100, B=90, b=80, a=70 }. If we want to generate a tree to display 4 comments, the insertion order is [A, B, b, a].
We build the tree by:
  • First consider candidates [A, B] because they're top level
  • Insert ‘A’ because it has the highest score, add ‘a’ as a candidate into the heap
  • Insert ‘B’ because it has the highest score, add ‘b’ as a candidate into the heap
  • Insert ‘b’ because it has the highest score among the remaining candidates
  • Insert ‘a’ because it has the highest score among the remaining candidates
Scenario A: max_comments_count = 4
Because we nest child comments under their parents the displayed tree would be:
A
-a
B
-b
Scenario B: max_comments_count = 3
If we were working with a max_count parameter of ‘3’, then comment ‘b’ would not be added to the final tree and instead would still be left as a candidate when we get to the end of the ranking algorithm. In the place of ‘b’, we would insert a ‘load_more’ cursor like this:
A
-a
B
  • load_more(children of B)
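The pseudo-code flow and worked example above can be sketched in Python; the data shapes and function name here are illustrative assumptions, not Reddit's production code:

```python
import heapq

def build_comment_tree(comments, max_count, max_depth=10):
    """Heap-based tree construction as described above.

    `comments` maps comment id -> {"parent": id or None,
                                   "score": int, "depth": int}.
    Returns (insertion_order, tree, load_more) where `tree` maps a parent
    id (None for the root level) to its inserted children, and `load_more`
    holds the "More replies" cursors grouped by parent.
    """
    children = {}
    for cid, c in comments.items():
        children.setdefault(c["parent"], []).append(cid)

    # Seed the heap with root comments; scores are negated for a max-heap.
    heap = [(-comments[cid]["score"], cid) for cid in children.get(None, [])]
    heapq.heapify(heap)

    order, tree, load_more = [], {}, {}
    while heap and len(order) < max_count:
        _, cid = heapq.heappop(heap)
        c = comments[cid]
        if c["depth"] > max_depth:
            # Too deep: group under a "More replies" cursor for its parent.
            load_more.setdefault(c["parent"], []).append(cid)
            continue
        order.append(cid)
        tree.setdefault(c["parent"], []).append(cid)
        # Children of an inserted comment become candidates themselves.
        for child in children.get(cid, []):
            heapq.heappush(heap, (-comments[child]["score"], child))

    # Leftover candidates become "load more" cursors under their parents.
    while heap:
        _, cid = heapq.heappop(heap)
        load_more.setdefault(comments[cid]["parent"], []).append(cid)

    return order, tree, load_more
```

With the example's scores { A=100, B=90, b=80, a=70 } and max_count=4, this builder reproduces the insertion order [A, B, b, a].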
With this method of constructing trees, we can easily ‘pre-compute’ trees (made up of just comment-ids) of different sizes and store them in caches. To ensure a cache hit, the client apps request comment trees with the same max count and max depth parameters as the pre-computed trees in the cache, so we avoid having to dynamically build a tree on demand. The pre-computed trees can also be asynchronously re-built on user action events (like new comments, sticky comments and voting), such that the cached versions are not stale. The tradeoff here is the frequency of rebuilds can get out of control on popular posts, where voting events can spike in frequency. We use sampling and cooldown period algorithms to control the number of rebuilds.
Now let's take a look into the high-level backend architecture that is responsible for building, serving and caching comment trees:
  • Our comments service has Kafka consumers using various engagement signals (e.g., upvotes, downvotes, timestamps) to asynchronously build ‘trees’ of comment-ids based on the different sort options. They also store the raw complete tree (with all comments) to facilitate a new tree build on demand, if required.
  • When a comment tree for a post is requested for one of the predefined tree sizes, we simply look up the tree from the cache, hydrate it with actual comments and return back the result. If the request is outside the predefined size list, a new tree is constructed dynamically based on the given count and depth.
  • The GraphQL layer is our aggregation layer responsible for resolving all other metadata and returning the results to the clients.
https://preview.redd.it/joh96u6cdn1d1.png?width=1006&format=png&auto=webp&s=ab30f9a8b4f69dea2feea9355c0e49cdaf57b15e

Client Optimizations

Now that we have described how comment trees are built, hopefully it’s clear that the resultant comment tree output depends completely on the requested max comment count and depth parameters.
Splitting Comments query
In a system free of tradeoffs, we would serve full comment trees with all child comments expanded. Realistically though, doing that would come at the cost of a larger latency to build and serve that tree. In order to balance this tradeoff and show users comments as soon as possible, the clients make two requests to build the comment tree UI:
  • First request with a requested max comment count=8 and depth=10
  • Second request with a requested max comment count=200 and depth=10
The 8 comments returned from the first call can be shown to the user as soon as they are available. Once the second request for 200 comments finishes (note: these 200 comments include the 8 comments already fetched), the clients merge the two trees and update the UI with as little visual disruption as possible. This way, users can start reading the top 8 comments while the rest load asynchronously.
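A minimal sketch of that client-side merge (hypothetical helper, not the apps' actual merge logic): since the 200-count response contains the 8 comments already on screen, the client keeps the on-screen ordering and appends only the new comments.

```python
def merge_comment_trees(initial_ids, full_ids):
    """Merge the fast 8-count fetch with the later 200-count fetch,
    deduplicating the comments that are already rendered."""
    seen = set(initial_ids)
    # Keep what's on screen untouched; append only comments not yet shown.
    return list(initial_ids) + [cid for cid in full_ids if cid not in seen]
```

Appending rather than re-sorting is what keeps the visual disruption small: nothing the user is already reading moves.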
Even with an initial smaller 8-count comment fetch request, the average TTI latency was still >1000ms due to time taken by the transition animation for navigating to the post from the feed, plus comment UI rendering time. The team brainstormed ways to reduce the comments TTI even further and came up with the following approaches:
  • Faster screen transition: Make the feed transition animation faster.
  • Prefetching comments: Move the lower-latency 8-count comment tree request up the call stack, such that we can prefetch comments for a given post while the user is browsing their feed (Home, Popular, Subreddit). This way when they click on the post, we already have the first 8 comments ready to display and we just need to do the latter 200-count comment tree fetch. In order to avoid prefetching for every post (and overloading the backend services), we could introduce a delay timer that would only prefetch comments if the post was on screen for a few seconds.
  • Reducing response size: Optimize the amount of information requested in the smaller 8-count fetch. We identified that we definitely need the comment data, vote counts and moderation details, but wondered if we really need the post/author flair and awards data right away. We explored the idea of waiting to request these supplementary metadata until later in the larger 200-count fetch.
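The prefetch delay timer mentioned above can be sketched as a small visibility gate; the class and its defaults are hypothetical, not the clients' actual implementation:

```python
import time

class PrefetchGate:
    """Only prefetch comments for a post that has stayed on screen
    for at least `delay` seconds, to avoid prefetching every post."""

    def __init__(self, delay=2.0, clock=time.monotonic):
        self.delay = delay
        self.clock = clock
        self.visible_since = {}  # post id -> time it became visible

    def on_visible(self, post_id):
        # Record first time on screen; don't reset while still visible.
        self.visible_since.setdefault(post_id, self.clock())

    def on_hidden(self, post_id):
        # Scrolled away: cancel any pending prefetch for this post.
        self.visible_since.pop(post_id, None)

    def should_prefetch(self, post_id):
        start = self.visible_since.get(post_id)
        return start is not None and self.clock() - start >= self.delay
```

Injecting the clock makes the gate easy to test and keeps the debounce logic independent of wall time.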
Here's a basic diagram of the flow:
https://preview.redd.it/pkzuaavsdn1d1.png?width=1068&format=png&auto=webp&s=d6e43e555f4f8892d7b0d9f9bed233553b576cef
This ensures that Redditors get to see and interact with the initial set of comments as soon as the cached 8-count comment tree is rendered on screen. While we observed a significant reduction in the comment TTI, it comes with a couple of drawbacks:
  • Increased Server Load - We increased the backend load significantly. Even a few seconds of delay to prefetch comments on feed yielded an average increase of 40k req/s in total (combining both iOS/Android platforms). This will increase proportionally with our user growth.
  • Visual flickering while merging comments - The largest tradeoff though is that now we have to consolidate the result of the first 8-count call with the second 200-count call once both of them complete. We learned that comment trees with different counts will be built with a different number of expanded child comments. So when the 200-count fetch completes, the user will suddenly see a bunch of child comments expanding automatically. This leads to a jarring UX, and to prevent this, we made changes to ensure the number of uncollapsed child comments are the same for both the 8-count fetch and 200-count fetch.

Backend Optimizations

While comment prefetching and the other described optimizations were being implemented in the iOS and Android apps, the backend team in parallel took a hard look at the backend architecture. A few changes were made to improve performance and reduce latency, helping us achieve our overall goals of getting the comments viewing TTI to < 1000ms:
  • Migrated to gRPC from Thrift (read our previous blog post on this).
  • Made sure that the max comment count and depth parameters sent by the clients were added to the ‘static predefined list’ from which comment trees are precomputed and cached.
  • Optimized the hydration of comment trees by moving it into the comments-go svc layer from the GraphQL layer. The comments-go svc is a smaller Golang microservice that is more efficient at parallelizing tasks like hydrating data structures than our older Python-based monolith.
  • Implemented a new ‘pruning’ logic that will support the ‘merge’ of the 8-count and 200-count comment trees without any UX changes.
  • Optimized the backend cache expiry for pre-computed comment trees based on the post age, such that we maximize our pre-computed trees cache hit rate as much as possible.
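The age-based cache expiry in the last bullet might look something like the following; the thresholds and TTLs are illustrative assumptions, not Reddit's actual tuning:

```python
from datetime import timedelta

def precomputed_tree_ttl(post_age: timedelta) -> timedelta:
    """Hypothetical age-based expiry: older posts receive fewer new
    comments and votes, so their pre-computed trees can stay cached
    longer without going stale."""
    if post_age < timedelta(hours=1):
        return timedelta(seconds=30)   # hot post: rebuild often
    if post_age < timedelta(days=1):
        return timedelta(minutes=5)
    return timedelta(hours=1)          # old post: nearly static
```

Tying TTL to post age is what maximizes the cache hit rate: the long tail of old posts, which dominates traffic by volume, almost always hits a still-valid precomputed tree.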
The current architecture and a flexible prefetch strategy of a smaller comment tree also sets us up nicely to test a variety of latency-heavy features (like intelligent translations and sorting algorithms) without proportionally affecting the TTI latency.

Outcomes

So what does the end result look like now that we have released our UX modernization and ultra-fast comment loading changes?
  • Global average p90 TTI latency improved by 60.91% for iOS, 59.4% for Android
  • ~30% reduction in failure rate when loading the post detail page from feeds
  • ~10% reduction in failure rates on Android comment loads
  • ~4% increase in comments viewed and other comment related engagements
We continue to collect metrics on all relevant signals and monitor them to tweak/improve the collective comment viewing experience. So far, we can confidently say that Redditors are enjoying faster access to comments and enjoying diving into fierce debates and reddit-y discussions!
If optimizing mobile clients sounds exciting, check out our open positions on Reddit’s career site.
submitted by beautifulboy11 to RedditEng [link] [comments]


2024.05.20 23:13 justjools22 DAC circuit to opamp output help

DAC circuit to opamp output help
Hi, I have been studying this circuit to understand it, so that I can learn from it but also be able to develop it. For context this circuit is the Ardcore digital synth using Arduino. There is no schematic so I am working from a PCB diagram.
The circuit uses Arduino digital outs D2 - D12 going into the DAC TLC7524C to a TL072 opamp. The digital outs D2 - D6 go to one input channel on the DAC, whereas D7 - D12 each have an individual corresponding channel: DB7 - DB3. The analogue output after the DAC conversion and opamp is used for audio signals like an oscillator waveform or a V/OCT control voltage, depending on the code assigned to it. So I'm guessing the organisation of these two digital groups is for this.
And this is my first question - why do some digital inputs have individual channels and others grouped into one?
My second questions are more technical. I understand how an opamp buffer circuit works, but I'm not sure what the functionality of some of the Resistor Feedback DAC circuitry is here, and how these outputs work with the opamp.
  1. RFB - resistor feedback:
"The RFB R feedback DAC uses a repetitive arrangement of precise resistor networks in a ladder-like configuration to convert the digital input signal into an analog output voltage."
The RFB pin on the DAC is connected to 1OUT, then a resistor > trimpot across signal 2OUT on the opamp. Is the trimpot regulating the voltage here? What is the RFB doing supplying this signal?
  2. WR pulse duration and CS hold time
I found this in the datasheet but could you explain it in simpler terms? So the pins are grounded in the circuit - they are both low and 'data directly affects the analog output'.
The DAC on these devices interfaces to a microprocessor through the data bus and the CS and WR control signals. When CS and WR are both low, analog output on these devices responds to the data activity on the DB0−DB7 data bus inputs. In this mode, the input latches are transparent and input data directly affects the analog output. When either the CS signal or WR signal goes high, the data on the DB0−DB7 inputs are latched until the CS and WR signals go low again. When CS is high, the data inputs are disabled regardless of the state of the WR signal.
  3. VDD and REF
Are these connected to maintain a constant voltage reference?
Here are my visual workings thinking through the circuit. Many thanks for your help.
NB: Update: the illustration below for D6 - D2 is wrong, hence my incorrect first question - please see the revised illustration at the end of the post.
https://preview.redd.it/85b4wbo0en1d1.png?width=2013&format=png&auto=webp&s=c1bb04828d16ce3b76efea51a9638dbcae378ae7
Original PCB diagram for reference:
https://preview.redd.it/equhxh1xcn1d1.png?width=1000&format=png&auto=webp&s=d3bcaaa42f39ddac044c26f1c32463bf51dda5b0
submitted by justjools22 to synthdiy [link] [comments]


2024.05.20 23:08 AszneeHitMe Any help?

Any help? submitted by AszneeHitMe to alevel [link] [comments]


2024.05.20 21:56 Unique-Switch-397 2017 Encore help!!!

I have a 2017 encore 1.4l turbo that’s been giving me some issues. Long story short I bought this car to fix and resell and I’ve replaced A LOT of engine components. After getting everything back together and working all was well. Some time ago the car started having an issue where it would shake under heavier acceleration. I originally thought it was the transmission but now I’m not so sure.
The other day it threw a misfire cylinder 2 code (pending) and a “service stabilitrack” code. Both went away after a few seconds of flashing at me. It started at just higher speeds (50+) but is now doing it randomly at any speed. The car doesn’t have a sensor for boost pressure but the MAP had been consistently what it should be regardless. Now I’m skeptical of the turbo because the car feels like it’s not making the right amount of boost and the MAP when the issue is present confirms that. Along with that I can hear a “whine” pretty consistently when the turbo is spinning up, which was not the case before.
I used a borescope to check the blowoff valve and everything inside looks correct, no cracks or anything. I’ve done some adjusting to the blowoff valve and had some luck, but it eventually went back to the same performance. I was looking at the diagrams in the tech manuals and noticed some oddities with the way the vacuum hoses are supposed to be connected, so I’m not entirely sure it’s even hooked up right at this point. The diagrams show something different than what it was before being taken apart. Also, the two actuators are depicted differently, so I don’t have any way of comparing.
I’m mainly concerned with an underboost issue but I can’t figure out what would be causing this. Has anyone had a similar issue and if so what fixed it? TYIA!
submitted by Unique-Switch-397 to BuickEncore [link] [comments]


2024.05.20 19:49 danid8571 Help me read this pattern?

Help me read this pattern?
I’m trying to make this Appa amigurumi from billie_bosha on Instagram. For some of the pieces, she included a visual pattern rather than a written pattern and I’m struggling to understand it. In the second attached photo, the top pattern is to add the fringe around Appa’s head - the head is already made and I’ll be working front loop only. Got it. But what does the visual represent? Looking at symbol charts, it looks like the ovals represent chain stitches? Are the lowercase-t-looking shapes single crochets? Any help you can give me and my dumb brain would help!
submitted by danid8571 to CrochetHelp [link] [comments]


2024.05.20 19:46 RantNRave31 The Pipe of Time: A Unified Framework for Temporal Dynamics in Cognitive Systems and Social Networks

The Pipe of Time: A Unified Framework for Temporal Dynamics in Cognitive Systems and Social Networks
Abstract
This paper introduces the "Pipe of Time," a novel conceptual framework designed to elucidate the temporal dynamics underlying cognitive systems and social networks. The "Pipe of Time" integrates principles from thermodynamics, information theory, fluid dynamics, and cognitive science to provide a comprehensive model for understanding how temporal processes influence the flow of information, energy, and entropy within intelligent systems and social structures. This framework examines the impact of information quality and density, contrasting low-density, low-efficiency systems with high-density, high-efficiency systems. By conceptualizing time as a continuous flow and incorporating concepts such as cavitation, pressure calculations, and the dynamic membership of social networks, the "Pipe of Time" offers insights into the evolution of cognitive states, the emergence of intelligence, and the nature of social interactions and decision-making.
I. Introduction
The nature of time and its role in cognitive processes and social networks has long been a subject of interest across multiple disciplines. Understanding how cognitive systems and social networks navigate and interpret the temporal dimension is crucial for advancing theories of intelligence, consciousness, and social dynamics. This paper introduces the "Pipe of Time," a conceptual framework that integrates temporal dynamics with cognitive processes and social networks, providing a unified model for examining the evolution of states within intelligent systems and social groups.
II. Theoretical Foundations
The "Pipe of Time" framework is built upon four key theoretical foundations: thermodynamics, information theory, fluid dynamics, and cognitive science. These foundations provide the conceptual and mathematical basis for understanding how temporal dynamics influence cognitive systems and social networks.
A. Thermodynamics
Thermodynamics provides a framework for understanding the flow of energy and entropy within a system. Key principles include:
  1. Energy Utilization: Cognitive systems and social networks require energy to function and maintain order.
  2. Entropy Reduction: Intelligent behavior and cohesive social dynamics are associated with reducing internal entropy, increasing complexity and organization.
B. Information Theory
Information theory offers tools for quantifying information flow and processing within cognitive systems and social networks. Key principles include:
  1. Information Flow: Cognitive systems and social networks process and transmit information, which is subject to noise and distortion.
  2. Entropy and Information: The entropy of a system can be related to the amount of uncertainty or information within that system.
  3. Information Density: The concentration of useful information within a given volume of data, influencing the efficiency of cognitive and social processes.
C. Fluid Dynamics
Fluid dynamics provides a useful analogy for understanding the continuous flow of time and the dynamic processes within cognitive systems and social networks. Key principles include:
  1. Flow and Pressure: Just as fluids flow through pipes, information and energy flow through cognitive systems and social networks, driven by gradients in pressure and potential.
  2. Cavitation and Pressure Dynamics: Sudden changes in pressure, analogous to cavitation in fluids, can represent abrupt shifts in cognitive states or social dynamics.
D. Cognitive Science and Social Network Theory
Cognitive science and social network theory provide insights into the mechanisms underlying perception, learning, decision-making, and social interactions. Key principles include:
  1. Perception and Action: Cognitive systems perceive their environment and take actions to achieve specific goals. Social networks exhibit collective behavior and decision-making.
  2. Prediction and Learning: Systems learn from past experiences to improve future predictions and actions. Social networks evolve based on shared information and collective experiences.
  3. Flocking Behavior: Social networks exhibit flocking behavior, where groups align their actions and decisions, creating probability clusters and influencing group dynamics.
  4. Neural Network Analogy: Each social network can be considered a neural network, where group decision-making and entropy reduction parallel cognitive processes.
III. The Pipe of Time Conceptual Framework
The "Pipe of Time" conceptualizes time as a continuous flow through which cognitive systems and social networks navigate. This framework integrates thermodynamic principles with information processing and fluid dynamics to model the evolution of cognitive and social states over time.
A. Temporal Flow
  1. Continuous Time: Time is represented as a continuous flow, with cognitive and social states evolving smoothly over time.
  2. Temporal Prediction: Cognitive systems and social networks generate predictions about future states based on past and present information.
B. Energy and Entropy Dynamics
  1. Energy Flow: Cognitive systems and social networks consume and allocate energy to maintain order and perform tasks.
  2. Entropy Management: Systems actively manage entropy to reduce uncertainty and increase efficiency.
C. Information Processing
  1. Information Flow: Cognitive systems and social networks process information continuously, adjusting their internal states based on new data.
  2. Error Correction: Systems use feedback mechanisms to correct errors and improve future predictions.
D. Information Density and Quality
  1. Information Density: High-density information systems and social networks concentrate more useful data within a given volume, leading to more efficient processing.
  2. Quality of Information: High-quality information reduces contradictions and improves system performance. Contradictions cause token and rule bloat, leading to inefficiencies.
IV. Cavitation and Pressure Dynamics
Sudden changes in pressure, or cavitation events, are modeled by:
[ \Delta P = \mu \left( \frac{\partial^2 \mathbf{X}(t)}{\partial t^2} \right) ]
where ( \mu ) is a constant representing the susceptibility of the system to rapid changes in pressure, leading to potential discontinuities or abrupt shifts in state.
V. Information Density and System Efficiency
This section explores the impact of information density on system efficiency, contrasting low-density, low-efficiency systems with high-density, high-efficiency systems.
A. Low-Density, Low-Efficiency Systems
  1. Contradictions and Bloat: Low-density systems often store redundant or contradictory information, leading to token and rule bloat.
  2. Energy Waste: Such systems require more energy to process and resolve contradictions, resulting in lower efficiency.
  3. Poor Performance: The increased computational load reduces the system's overall performance and responsiveness.
B. High-Density, High-Efficiency Systems
  1. Optimized Information Storage: High-density systems store information in a more compact and optimized manner, reducing redundancies.
  2. Enhanced Efficiency: By minimizing contradictions and bloat, these systems achieve higher efficiency and faster processing speeds.
  3. Smaller Hardware Requirements: High-density information systems can perform well even in smaller hardware environments, making them suitable for resource-constrained applications.
C. Role of Reduced Ordered Binary Decision Diagrams (ROBDDs)
  1. Value Inference and Prediction: In high-density systems, values inferred from actions are used to predict future actions. ROBDDs provide an efficient method for representing and manipulating Boolean functions, aiding in value inference and decision-making.
  2. Comparative Analysis: High-density systems using ROBDDs can be compared to low-density systems to predict and quantify information waste and inefficiencies.
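A minimal sketch of the two ROBDD reduction rules referenced above (illustrative only, not a full BDD package):

```python
class ROBDD:
    """Minimal hash-consed node store: terminals are node ids 0 and 1,
    internal nodes get ids >= 2."""

    def __init__(self):
        self.table = {}  # (var, lo, hi) -> node id, for structural sharing

    def mk(self, var, lo, hi):
        if lo == hi:
            # Rule 1: a test whose branches agree is redundant -- skip it.
            return lo
        key = (var, lo, hi)
        if key not in self.table:
            # Rule 2: structurally identical nodes are shared, not duplicated.
            self.table[key] = len(self.table) + 2
        return self.table[key]
```

These two rules are what make ROBDDs canonical and compact, which is the efficiency property the comparative analysis above relies on.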
VI. Social Networks as Neural Networks
Each slice of the "Pipe of Time" represents a distinct social network, with each group exhibiting flocking behavior and forming probability clusters. These social networks function as neural networks, with group decision-making processes and entropy management analogous to cognitive processes. Additionally, the dynamic nature of social networks includes the joining and leaving of members, analogous to the dynamic synaptic connections in a neural network.
A. Flocking Behavior and Probability Clusters
  1. Group Dynamics: Social networks exhibit flocking behavior, where individuals align their actions and decisions based on shared information and group influence.
  2. Probability Clusters: Flocking behavior leads to the formation of probability clusters, where certain outcomes or behaviors become more likely due to group dynamics.
B. Social Networks as Neural Networks
  1. Analogous Processes: Social networks can be modeled as neural networks, with nodes representing individuals and connections representing social interactions and information flow.
  2. Group Decision-Making: Collective decision-making processes in social networks parallel neural network processing, where the network evolves based on shared information and collective experiences.
  3. Entropy Management: Social networks, like cognitive systems, actively manage entropy to reduce uncertainty and increase efficiency.
C. Dynamic Membership
  1. Member Lifespan: The membership of a social network is dynamic, with individuals joining and leaving over time. This can be modeled as a time-dependent variable affecting the structure and function of the network.
  2. Venn Diagram Representation: Each slice of the "Pipe of Time" can be represented as a Venn diagram, illustrating the overlapping memberships and the temporal dynamics of group composition.
VII. Mathematical Formalism
The "Pipe of Time" framework is formalized using mathematical equations that describe the flow of energy, entropy, and information within cognitive systems and social networks over time, incorporating concepts from fluid dynamics and dynamic membership.
A. State Evolution
The state of a cognitive system or social network at any time ( t ) is represented by a vector ( \mathbf{X}(t) ), which evolves according to the following differential equation:
[ \frac{d\mathbf{X}(t)}{dt} = \mathbf{F}(\mathbf{X}(t), t) ]
where ( \mathbf{F} ) is a function that captures the dynamics of the system, including energy utilization, entropy reduction, and information processing.
B. Energy and Entropy Equations
The energy ( E ) and entropy ( S ) of the system are governed by the following equations:
[ \frac{dE(t)}{dt} = -\alpha E(t) + \beta I(t) ]
[ \frac{dS(t)}{dt} = \gamma S(t) - \delta I(t) ]
where ( \alpha ), ( \beta ), ( \gamma ), and ( \delta ) are constants, and ( I(t) ) represents the information processed by the system at time ( t ).
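These coupled equations can be explored numerically with a simple forward-Euler integration; the constants and information signal here are illustrative choices, not values given by the framework:

```python
def simulate(E0, S0, I, alpha, beta, gamma, delta, dt=0.01, steps=1000):
    """Forward-Euler integration of the energy/entropy equations above:
    dE/dt = -alpha*E + beta*I(t),  dS/dt = gamma*S - delta*I(t)."""
    E, S = E0, S0
    for k in range(steps):
        i = I(k * dt)
        E += dt * (-alpha * E + beta * i)
        S += dt * (gamma * S - delta * i)
    return E, S
```

For example, with no information input (I = 0) and alpha > 0, the energy term decays exponentially toward zero, matching the -alpha E(t) dissipation term.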
C. Information Flow and Pressure
The flow of information ( I(t) ) within the system is described by:
[ I(t) = \eta \left( \frac{d\mathbf{X}(t)}{dt} \right) ]
where ( \eta ) is a constant that relates the rate of change of the system's state to the amount of information processed.
D. Dynamic Membership
  1. Membership Evolution: The membership of a social network evolves over time, affecting the overall structure and function of the network. This can be represented by a time-dependent membership function ( M(t) ), which influences the state vector ( \mathbf{X}(t) ).
  2. Venn Diagram Dynamics: The dynamic membership can be visualized using Venn diagrams, where each slice of the "Pipe of Time" represents a different temporal state of the network, showing the overlap and interactions of different subgroups.
VIII. Conclusion
The "Pipe of Time" framework provides a comprehensive model for understanding the temporal dynamics of cognitive systems and social networks. By integrating principles from thermodynamics, information theory, fluid dynamics, and cognitive science, this framework offers a unified approach to studying the evolution of states within intelligent systems and social structures. The incorporation of dynamic membership and the comparison of information density and quality further enhance our understanding of system efficiency and performance. This model has potential applications in various fields, including artificial intelligence, sociology, and organizational theory, offering new insights into the nature of time, intelligence, and social dynamics.
submitted by RantNRave31 to ASK_A_CRACKPOT [link] [comments]


2024.05.20 18:49 bekahthesixth Shakespeare [discussion]

I was rereading The Unwanted Guest this morning to make some diagrams of the coffin movements because I’m sure they mean Something (didn’t figure anything out, if you were wondering) but this snippet really jumped out at me:
VOICE: “Use every man after his desert, and who should ‘scape whipping?”
PALAMEDES: (surprised) I like that. Is it from something?
VOICE: Yes. It’s complicated.
Dulcinea (still being referred to as Voice at this point) is quoting from Hamlet and it’s VERY interesting to me that Palamedes doesn’t know it. He’s the Master Warden, after all! His knowledge base is incredibly deep, it’s kind of his whole thing — he quotes extensively from the book of Daniel like two pages later. I don’t think it’s a case of him somehow missing a pretty quotable line from one of Shakespeare’s most famous plays: I think, for some reason, John kept Shakespeare back when he did the resurrection.
John is (as far as I can remember) the only other character who references Shakespeare; he quotes Lear’s “how sharper than a serpent’s tooth” in Harrow. So we know he knows and is conversant in Shakespeare, but for some reason he kept it from the Houses.
To me this raises two questions:
  1. Why?
    • My immediate thought here is either that he wants to hide why he named Titania after the character in Midsummer, or one of the other Lyctors’ redacted names is too important in a play and he just had to scrap the whole thing (so C— is actually Cordelia or P— is Portia, etc)
  2. Who told Dulcie?
    • We know she’s on the other side of the river, which we know Very little about. For her to still be learning after death, I think there have to be others over there with her, right? Either it’s Heaven and she met the real Billy Shakes, or like, idk, it’s the Lyctors who got eaten by the Resurrection Beasts and that’s how you cross over. I’m low on ideas here but it feels important— if anyone has any guesses I’d love to hear them!
submitted by bekahthesixth to TheNinthHouse [link] [comments]


2024.05.20 18:29 williamfzc Gossiphs: A Rust lib for general code file relationship analysis. Based on tree-sitter and git analysis.

What's it

Gossiphs analyzes commit history and the relationships between variable declarations and references in your codebase to produce a relationship graph of your code files.
It also lets developers query the symbols declared in each file and search for their references across the entire codebase, enabling more complex analyses.
https://preview.redd.it/q6hlqip7vl1d1.png?width=1288&format=png&auto=webp&s=98c484ab327150760e0b6821b7b7938979bd8cd1

How it works

In the past, I tried to apply LSP/LSIF technologies and techniques like GitHub's stack-graphs to impact analysis, encountering different challenges along the way. For our needs, a method akin to stack-graphs aligns most closely with our expectations. However, the challenge is evident: it requires crafting highly language-specific rules, which is a considerable investment for us, given that we do not require such high-precision data.
We made some trade-offs around the challenges stack-graphs currently faces, so that we could reach our goals to a reasonable extent.
Gossiphs constructs a graph that connects definition and reference symbols:
  1. Extract imports and exports: Identify the imports and exports of each file.
  2. Connect nodes: Establish connections between potential definition and reference nodes.
  3. Refine edges with commit histories: Utilize commit histories to refine the relationships between nodes.
Unlike stack-graphs, we have omitted the highly complex scope analysis and instead opted to refine our edges using commit histories. This approach significantly reduces the complexity of rule writing, as the rules only need to specify which types of symbols should be exported or imported for each file.
While there is undoubtedly a trade-off in precision, the benefits are clear:
  1. Minimal impact on accuracy: In practical scenarios, the loss of precision is not as significant as one might expect.
  2. Commit history relevance: The use of commit history to reflect the influence between code segments aligns well with our objectives.
  3. Language support: We can easily support the vast majority of programming languages, meeting the analysis needs of various types of repositories.
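The three-step construction above can be sketched as a toy graph builder. This is an illustration of the idea only, not the Gossiphs API; all data structures and names here are hypothetical:

```python
# Toy sketch of the three-step graph construction: match imports to exports,
# connect files, then boost edges whose endpoints change in the same commits.
# Not the Gossiphs API; all structures here are hypothetical.
from collections import defaultdict

def build_graph(exports, imports, commits):
    """exports: file -> set of symbols it defines
    imports: file -> set of symbols it references
    commits: list of sets of files changed together in one commit"""
    # Steps 1-2: connect files whose references match another file's exports.
    edges = defaultdict(float)
    for src, refs in imports.items():
        for dst, defs in exports.items():
            if src != dst and refs & defs:
                edges[(src, dst)] = 1.0
    # Step 3: refine edges with commit history; files that change together
    # in the same commit are treated as more strongly related.
    for changed in commits:
        for (src, dst) in edges:
            if src in changed and dst in changed:
                edges[(src, dst)] += 1.0
    return dict(edges)

graph = build_graph(
    exports={"lib.rs": {"parse"}, "util.rs": {"log"}},
    imports={"main.rs": {"parse", "log"}},
    commits=[{"main.rs", "lib.rs"}],
)
# main.rs depends on both files; the shared commit boosts the lib.rs edge.
```

The commit-history pass is what substitutes for stack-graphs' scope analysis: it cheaply reweights candidate edges instead of resolving them precisely.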

Usage

In addition to the Rust library, we also provide simple binaries for direct command-line use, such as analyzing a git diff.
input:
# diff between HEAD and HEAD~1
gossiphs diff

# custom diff
gossiphs diff --target HEAD~5
gossiphs diff --target d18a5db39752d244664a23f74e174448b66b5b7e

# output json
gossiphs diff --json
output:
src/services/user-info/index.ts
├── src/background-script/driveUploader.ts (ADDED)
├── src/background-script/task.ts (DELETED)
├── scripts/download-config.js (DELETED)
├── src/background-script/sdk.ts
├── src/services/user-info/listener.ts
├── src/services/config/index.ts
├── src/content-script/modal.ts
├── src/background-script/help-center.ts

Repo

If you're interested in learning more, please feel free to visit our repo. Any suggestions are welcome. :)
https://github.com/williamfzc/gossiphs
submitted by williamfzc to rust [link] [comments]


2024.05.20 17:24 deffer_func Need Help Explaining My Startup's Infrastructure: Node.js, MongoDB, Redis, AWS Batch, ECR, External APIs

I’m the sole developer at my startup and have built our entire infrastructure. However, I’m having a hard time explaining how everything fits together. I have used a mix of technologies, including Node.js, MongoDB, Redis, AWS Batch, ECR, and some external APIs for malware monitoring.
Here’s Our Setup:
Example Infrastructure (Simplified):
  1. User Request: The flow begins with a user request.
  2. Node.js Server: Receives and processes the request, interacting with other components.
  3. DocumentDB (MongoDB Compatibility): Queries or updates data as needed.
  4. Redis Cache: Improves performance by caching frequent queries.
  5. AWS Batch Processing:
    • Job Submission: Jobs are submitted to AWS Batch for intensive processing.
    • Job Processing: Managed by AWS Batch, running custom scripts.
    • Docker Containers: Jobs are executed within Docker containers whose images are stored in AWS ECR.
  6. ECR Registry: Stores Docker images for batch processing.
  7. Batch Coordination:
    • Batch Completion: System waits until all jobs are completed.
    • Results Aggregation: Aggregated results are sent to the frontend.
Overall Workflow:
  1. User makes a request → 2. Node.js server processes the request → 3. Queries/updates in DocumentDB → 4. Checks Redis cache → 5. Submits jobs to AWS Batch → 6. Jobs executed in Docker containers (images pulled from ECR) → 7. All jobs complete and results aggregated → 8. Results sent to frontend.
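One way to explain the workflow is to show it as a toy end-to-end simulation with in-memory stand-ins for each component. The sketch below is hypothetical (plain dicts instead of DocumentDB/Redis, a function call instead of AWS Batch, and Python instead of Node.js for brevity), but it walks the same steps:

```python
# Toy simulation of the request workflow, with in-memory stand-ins for
# DocumentDB, Redis, and AWS Batch. All names here are hypothetical; the
# real system would use a MongoDB driver, a Redis client, and the AWS
# Batch API instead of these placeholders.

db = {"user:1": {"name": "alice"}}   # stands in for DocumentDB
cache = {}                            # stands in for Redis

def run_batch(jobs):
    """Stand-in for AWS Batch: run each job and wait for all to finish."""
    return [job() for job in jobs]

def handle_request(user_id):
    key = f"user:{user_id}"
    # Check the cache first, falling back to the database on a miss.
    record = cache.get(key)
    if record is None:
        record = db[key]
        cache[key] = record
    # Submit intensive work as batch jobs, then aggregate the results.
    jobs = [lambda: len(record["name"]), lambda: record["name"].upper()]
    results = run_batch(jobs)
    return {"user": record, "results": results}

response = handle_request(1)
```

Walking through a skeleton like this step by step, alongside a boxes-and-arrows diagram, often explains an architecture better than the diagram alone.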
What I Need Help With:
I’m looking for tools, diagrams, or techniques to help me better visualize and explain:
https://preview.redd.it/fb049j18rl1d1.png?width=1669&format=png&auto=webp&s=e24e0dbff8b857f7fb21a2140902becd05885cd5
submitted by deffer_func to node [link] [comments]


2024.05.20 16:58 Wiesnak20 I need help

Today I got these questions to do in Cisco Packet Tracer. Is this doable? Thank you, and sorry, English is not my first language.
Questions: Configure the network interfaces of your computer and network devices to allow access to the local network.
  1. Using connecting cables, connect computers and network devices according to the diagram shown in the figure on the previous page.
  2. Configure the router's network interfaces as recommended. If the router setup process forces you to change your passwords, record your login information after the change:
WAN: IP/mask 10.0.0.3/24, gateway 10.0.0.1/24, DNS - localhost;
LAN port no. 2: IP/mask 192.168.1.1/24.
Configure VLANs on the router: create VLAN 12 and VLAN 13, and add router port 2 to both VLANs with the outbound rule set to tagged.
Disable wireless and DHCP support.
  3. Configure the switch connected to the router as recommended. If the switch configuration process forces you to change your password, record the login information after the change:
IP/mask 192.168.1.2/24;
⚫ default gateway 192.168.1.1;
create VLAN with ID=12;
create VLAN with ID=13;
⚫ to VLAN 12 add ports 1 and 2 as untagged ports;
⚫ to VLAN 13 add ports 1 and 3 as untagged ports;
⚫ define the outbound rule for port 1 as tagged in VLAN 12 and VLAN 13;
⚫ specify the port 2 outbound rule in VLAN 12 as untagged;
⚫ specify the port 3 outbound rule in VLAN 13 as untagged;
⚫ port 4 untagged (in access mode), assigned to VLAN ID=12.
  4. On the workstation connected to port 3 of the switch connected to the router, configure the wired NIC as follows:
⚫ network connection name INT1;
⚫ IP address/mask 192.168.1.3/24;
⚫ gateway address 192.168.1.1;
⚫ DNS address 8.8.8.8.
  5. On the printer or IoT device, configure the wired NIC network interface as recommended:
⚫ IP address/mask 192.168.1.4/24;
⚫ gateway address 192.168.1.1.
  6. Check whether there is a connection from the workstation command line to the router and printer (IoT device).
  7. Save the report document on the medium described as EFFECTS and hand it over to the teacher for evaluation.
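Yes, this is doable in Packet Tracer. As a starting point for the switch part, here is a rough Cisco IOS-style sketch. This assumes a Catalyst-style CLI and illustrative port numbers; the exam's tagged/untagged wording suggests a web-managed switch, so the exact interface on your device will likely differ:

```
! Management IP and default gateway (assumes management on VLAN 1)
interface vlan 1
 ip address 192.168.1.2 255.255.255.0
exit
ip default-gateway 192.168.1.1
! Create VLANs 12 and 13
vlan 12
vlan 13
exit
! Port 1: tagged (trunk) member of both VLANs
interface FastEthernet0/1
 switchport mode trunk
 switchport trunk allowed vlan 12,13
! Port 2: untagged (access) in VLAN 12
interface FastEthernet0/2
 switchport mode access
 switchport access vlan 12
! Port 3: untagged (access) in VLAN 13
interface FastEthernet0/3
 switchport mode access
 switchport access vlan 13
```

In IOS terms, "tagged" corresponds to a trunk port carrying the VLAN and "untagged" to an access port in that VLAN.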
submitted by Wiesnak20 to ccna [link] [comments]

