Acad sheet templates

sheettemplates

2021.07.01 18:08 ravenlordkill sheettemplates

Share Excel and Google Sheet tips, tricks, templates, queries, formulas, dashboards. Request a template too.


2018.08.11 00:46 ssyeon0325 The New Generation of MCAT

Please use this platform to communicate with your fellow pre-med MCAT-ers! This page was created to revive MCAT! After the revival of MCAT, this page will be utilized as an additional platform to serve pre-meds with help on the MCAT.


2013.01.20 21:06 Dark Souls 3

A community dedicated to everything about Dark Souls 3.


2024.05.14 06:59 NancyBlankenship [Get] Domont Consulting – Mergers and Acquisitions Toolkit Download

[Get] Domont Consulting – Mergers and Acquisitions Toolkit Download

WHAT YOU GET:

This Toolkit includes frameworks, tools, templates, tutorials, real-life examples, best practices, and video training to help you:
  • Increase your M&A success rate with our 6-phase M&A approach: (I) Define your M&A strategy, (II) Identify target companies, (III) Build a business case and financial modeling, (IV) Conduct due diligence, (V) Execute transaction, (VI) Conduct post-merger integration
  • Define your M&A strategy: (1) Company mission, vision and values, (2) M&A strategic objectives and key performance indicators, (3) M&A team, (4) M&A guiding principles, (5) Target screening criteria
  • Identify target companies: (1) Potential target companies and data collection, (2) High-level assessment of potential target companies, (3) Shortlisted potential targets, (4) Financial statements analysis, (5) Business valuation: DCF model, comparable company analysis, and precedent transaction analysis, (6) Targets approved for the business case phase
  • Build a business case and an M&A financial model: (1) Strategic benefit, (2) Feasibility, (3) Financial benefit, (4) Comprehensive M&A financial model including acquirer model, target model, merger assumptions & analysis, and pro forma model, (5) Simple financial model including integration cost, revenue synergy, cost synergy, NPV, ROI, and IRR, (6) Letter of intent or term sheet
  • Conduct due diligence (CDD) to identify the likely future performance of a company: (1) Work plan including key business case hypotheses and assumptions, (2) Due diligence to validate key hypotheses and assumptions, (3) Updated business valuation, (4) Recommendation to make (or not) a formal offer to acquire the target company
  • Execute transaction: (1) Deal structure, (2) M&A negotiations, (3) Signing and closing the M&A deal
  • Conduct successful post-merger integration to ensure the company reaches its cost and revenue synergy targets: (1) Post-merger integration strategy and high-level plan, (2) Post-merger integration detailed plans, (3) Implementation and monitoring
  • https://courseshere.com/download/get-domont-consulting-mergers-and-acquisitions-toolkit-download/
submitted by NancyBlankenship to u/NancyBlankenship


2024.05.13 23:24 Grouchy_Carpenter489 Oracle Fusion Cloud ERP: It is time to forget about standard Excel sheets and take an enhanced data upload tool

Time to Forget About Ordinary Excel Sheets and Adopt an Enhanced Data Upload Tool
Thousands of Oracle Fusion ERP users worldwide use ADFdi (ADF Desktop Integration) and FBDI (File-Based Data Import) for data loading and general data management. Excel has some great features that help to streamline data analysis, and there is no argument that Excel is a highly functional tool for organizational data management.
Ordinary Microsoft Excel spreadsheets have many limitations when it comes to loading data into Oracle Fusion Cloud. Excel is great for simple ad hoc calculations, but it lacks the connectivity features needed to automate and document its contents, making its use prone to error.
Manually keying data into Oracle Cloud from Excel worksheets, or copy-pasting it, is a slow, time-consuming process that is bound to reduce employee productivity. Accuracy also suffers, and inaccurate data can cost an organization millions in revenue. Excel offers little automation, so if you handle large volumes of data, there may be a better tool for you. Furthermore, data security is not assured, since Excel does not have encryption features.
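Even before adopting a specialized loader, some of that manual-entry risk can be scripted away. As a minimal illustration (not either vendor's code; the column names here are invented for the example), a few lines of Python can validate rows before anyone keys them into Oracle:

```python
# Minimal pre-upload validation sketch. The column names ("JournalAmount",
# "CurrencyCode") are hypothetical examples, not an Oracle-defined layout.
import csv
import io

def validate_rows(csv_text: str) -> list:
    """Return a list of (row_number, message) for rows that fail basic checks."""
    errors = []
    reader = csv.DictReader(io.StringIO(csv_text))
    for i, row in enumerate(reader, start=2):  # row 1 is the header
        if not row["CurrencyCode"].strip():
            errors.append((i, "missing currency code"))
        try:
            float(row["JournalAmount"])
        except ValueError:
            errors.append((i, "amount is not numeric"))
    return errors

sample = "JournalAmount,CurrencyCode\n100.50,USD\nabc,\n"
print(validate_rows(sample))  # row 3 fails both checks
```

Checks like these catch the obvious errors (blank keys, non-numeric amounts) before they cost a round trip through Oracle's error-reporting cycle.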
The standard Oracle tools (ADFdi and FBDI) are rigid: the user cannot move columns around or even easily paste data from another sheet into ADFdi or FBDI. The error reporting and resolution cycle is cumbersome and requires specialized technical knowledge.
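For context, FBDI itself works on files: the data sheets of an FBDI template are saved as CSV and bundled into a zip archive before upload. Assuming hypothetical file names (a real FBDI template defines its own), the packaging step can be sketched like this:

```python
# Bundle CSV files into the zip archive that an FBDI upload expects.
# The file name below is illustrative; a real FBDI template defines its own.
import io
import zipfile

def make_fbdi_zip(csv_files: dict) -> bytes:
    """csv_files maps file name -> CSV text; returns the zip as bytes."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, content in csv_files.items():
            zf.writestr(name, content)
    return buf.getvalue()

payload = make_fbdi_zip({"GlInterface.csv": "header1,header2\nv1,v2\n"})
```

Scripting this step removes one more place where a manual mistake (wrong file name, stale sheet) can derail an import.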
Why do people still use Excel sheets for data management?
It’s cheaper
For a team that doesn't care about automation, why spend on something more costly when anything that stores data tables will do? But considering its limitations, is Excel worth the long-run cost?
Easy-to-use
Excel is easy to use. It is one of the basic Microsoft Office tools that most people learn in their first interactions with a computer. Because they are already familiar with it, most people find Excel easy to use and often prefer it to learning new tools.
Limited knowledge of what’s available
Some people are just stuck in their routines and fail to stay current on the newest software available on the market. If a team's leadership or members don't take the initiative to look around and find out what the market has to offer, they will be stuck with Excel and its attendant costs while others enjoy the benefits of more advanced tools.
Poor experience with some project management software
Choosing a data-loading tool to suit your needs is a task that should be taken seriously. Many teams that reverted to Excel were turned off by a previous experience with data-loading tools. Some tools are cumbersome and difficult to use, others are code-intensive and unsuitable for most end users, and some may lack features you are looking for. Often the poor experience comes down to poor customization.
Suppose you had a tool that allowed you to use the easy-to-use and familiar Excel worksheet while providing you with advanced specialized features for loading data into the cloud. Wouldn’t that be great?
How to make Excel work with advanced tools
Working with Excel for data loading does not have to be a slow, cumbersome process that guarantees neither the accuracy nor the security of your data. You can harness the power of Excel and still enjoy advanced data-loading tools. More4Apps and Simplified Loader are two Excel-based data-loading tools to consider.
More4Apps
More4Apps is an Excel-based data-loading tool that allows businesses to integrate familiar Excel spreadsheets with Oracle EBS and Oracle Fusion. Its tools work within the familiar interface of Microsoft Excel, leveraging the many features of Excel to facilitate data loading.
Training is optional since Excel is the main interface, and end-users are familiar with it. Unlike ordinary Excel spreadsheets, which are limited in scalability, More4Apps empowers data owners to carry out mass data uploads and updates.
A plugin must be installed on a PC before you can use More4Apps. The IT Helpdesk needs to be involved in installing the plugin, so only specific PCs can be used.
More4Apps sends and receives data through a server hosted by More4Apps. From a data-security standpoint, allowing data transfers through a third-party server without verifying how the data is handled is risky. Robust testing is required with every More4Apps release to ensure your data is transferred safely, and the IT Security department needs to be involved in verifying the third-party server and plugin.
Simplified Loader
~Simplified Loader~ is an Excel-based tool designed explicitly for uploading or downloading data to and from Oracle Fusion Cloud. The Simplified Loader template is easy to use. It includes a toolbar that contains operations specific to the template. The output of any operation is displayed in the Excel template's Load Status and Error Message fields.
Simplified Loader Excel files upload or download data from Oracle Fusion Cloud. Simplified Loader’s Excel templates are used either for mass data loads (for example, data migration) or for everyday data loading activities in Oracle Cloud.
Simplified Loader ensures your data’s security by routing data from the Excel template directly to Oracle Cloud without a third-party server. The Simplified Loader template doesn’t need plugin installation and runs using Macros, similar to how other Oracle Cloud tools interact with Oracle.
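To make "interacting with Oracle through APIs" concrete: Oracle Fusion Cloud exposes REST resources for bulk imports, such as the erpintegrations resource with an importBulkData operation. The sketch below only builds the request body; the job path is a placeholder, and none of this is either vendor's actual code:

```python
# Build the JSON body for an Oracle Fusion bulk-data import request.
# Field names follow Oracle's erpintegrations REST resource; the job
# path below is a placeholder, so treat the sketch as illustrative.
import base64

def build_import_payload(zip_bytes: bytes, job_path: str) -> dict:
    """Return the request body for an importBulkData call."""
    return {
        "OperationName": "importBulkData",
        "DocumentContent": base64.b64encode(zip_bytes).decode("ascii"),
        "ContentType": "zip",
        "JobName": job_path,  # an ESS job path plus job definition name
    }

payload = build_import_payload(b"zipped,csv,bytes", "your/ess/job/Path,YourJobName")
# The body would be POSTed with authentication to the pod's
# /fscmRestApi/resources/<version>/erpintegrations endpoint.
```

Either commercial tool wraps this kind of call (plus status polling and error retrieval) behind its Excel toolbar; the routing difference discussed above is whether such a request goes straight from the user's machine to Oracle or via a vendor server.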
Which template should you choose?
User convenience - Both More4Apps and Simplified Loader provide features that enhance the user experience, and most UX features are similar in both products. Since they use Microsoft Excel, additional training is rarely necessary. More4Apps provides a form to input data that is not in tabular format, whereas Simplified Loader provides a single unified sheet to enter data; the same sheet is used to invoke lists of values.
Both tools allow you to insert custom columns, hide or delete columns you don't need, and insert formulas you may need for data analysis. You can also analyze or validate data before uploading it.
Data Security - Oracle Fusion only allows interaction through APIs. Both More4Apps and Simplified Loader use APIs to interact with Oracle, so the security protocols are the same in both toolsets. More4Apps uses an external system to manage licenses, so in a highly data-sensitive environment, IT has to open additional ports to let the tool reach the More4Apps servers for license validation.
In terms of data security, both toolsets have the same features.
License Management - This topic differs considerably between More4Apps and Simplified Loader. More4Apps restricts the number of times an administrator can update the users licensed to use a template, whereas in Simplified Loader the administrator has full control over maintaining the users licensed to use Simplified Loader templates.
Support - Both organizations offer excellent support to users who log defects through the support system. Simplified Loader has a vast library of short videos demonstrating product features and functionality. More4Apps has recently adopted the video-tutorial approach as well.
Plugin installation - This is a key difference between the two templates. The More4Apps template requires an additional plugin installed on the user's machine. The user will always see an additional toolbar in Excel when working on any Excel document. The user always has to use the PC where the plugin is installed. In comparison, the Simplified Loader Excel doesn’t need any plugin installation on the user’s machine. When the user opens the Simplified Loader file, the Simplified Loader toolbar appears. Users won’t see the additional toolbar when they open any other Excel file.
Using Excel in parallel - When using either toolset, Excel cannot be used for any other purpose; the user has to wait until the data is loaded into Oracle.
Pricing - Both toolsets offer per-user licensing. More4Apps licenses per user by module, whereas Simplified Loader licenses per user by template. License management at the template level gives the administrator finer control to assign the right user to the right template, so you purchase only the licenses you need. More4Apps licenses are considerably more expensive (more than 5x) than Simplified Loader licenses.
Conclusion
Using ordinary Excel spreadsheets for data loading is not very effective. Excel has shortcomings, but you can pair it with advanced data-loading tools to get the best of both applications. Both More4Apps and Simplified Loader provide similar features for loading data into Oracle, and both make the experience more pleasant and effective. Simplified Loader is handier because it needs no plugin installation, and therefore no IT involvement to install one.
submitted by Grouchy_Carpenter489 to u/Grouchy_Carpenter489


2024.05.13 23:12 bambaazon Logic Pro 11.0 release notes

New Features and enhancements
New AI-enhanced tools join Smart Tempo and the Pitch Correction plug-in to augment your artistry.
Bass Player and Keyboard Player join Drummer to complete a set of Session Players, all built with AI, making it easy to create performances that respond to your direction.
Session Players can follow the same chord progression using the global Chord track.
Add warmth to any track with ChromaGlow, an advanced plug-in with five saturation models designed to simulate the sound of vintage analog hardware.*
Separate a stereo audio file into stems for vocals, drums, bass and other parts with Stem Splitter.*
Session Players, ChromaGlow, and Stem Splitter also come to Logic Pro for iPad 2 — making it simple to move between projects created in Logic Pro for Mac.
Play any of six deeply-sampled acoustic and electric basses with Studio Bass.
Perform any of three meticulously-sampled pianos with Studio Piano.
Loops that contain chord tags will automatically populate the chord track when added to a project.
Three new Producer Packs are available: Hardwell, The Kount, and Cory Wong.
Original multi-track project of Swing! by Ellie Dixon available as in-app demo song.
Downmix and trim options allow custom mixing for non-Atmos channel configurations.
Exported ADM BWF files have been expanded beyond Dolby Atmos and can contain settings for stereo and other multi-channel formats.
Bounce in place adds automatic real-time recording for External Instrument regions or tracks that utilize external hardware using the Logic Pro I/O plug-in.
Route MIDI signals generated by supported software instruments and effects to the input of other tracks for creative layering during playback or recording.
Edit more efficiently using key commands for moving, extending, or resizing marquee selections.
The Nudge Region/Event Position key commands now also nudge Marquee selections.
The Transpose Region/Event key commands now also move or expand the Marquee selection up/down.
Pattern regions can now be created on Drummer tracks, and Drummer regions can be converted to Pattern regions.
New key commands include Trim Note End to Following Notes (Force Legato) With Overlap and Trim Note End to Selected (Force Legato) With Overlap.
Bounce in Place and Track Freeze can now be performed in real time, allowing for use of external instruments, I/O plug-ins, and external inserts.
Mastering Assistant analysis now can be performed in real time, allowing for use in projects that incorporate external I/O or instruments.
The Dolby Atmos plug-in now offers Downmix and Surround/Height Trim controls.
The Recent Projects list can now be configured to show up to 50 projects.
* Requires a Mac with Apple silicon.
Stability and reliability
Scripts with 1071 characters or more in Scripter no longer cause Logic Pro to quit unexpectedly.
Fixes an issue where creating an event in a lane assigned to Note off in Step Sequencer could cause Logic Pro to quit unexpectedly.
Fixes an issue where Logic Pro could fail to launch with an Error Initializing Core MIDI message when the system is under heavy load performing other tasks.
Resolves an issue where Logic Pro could quit unexpectedly when a 64-bit floating point IR file is loaded into Space Designer.
Fixes an issue where Logic Pro could hang when opening a project while the Project Settings > MIDI window is displayed.
Logic Pro no longer quits unexpectedly when creating multiple Aux tracks with multiple existing Aux tracks selected.
Improves stability when bypassing control surfaces with Musical Typing open when EuControl software is installed.
Fixes an issue where Logic Pro could hang when quitting a project containing a large number of instances of Sampler.
Fixes an issue where Logic Pro could quit unexpectedly when replacing a playing Live Loops cell with another loop.
Performance
The UI is now more responsive when adjusting Flex Pitches directly on regions in Deviation mode.
Performance is improved when editing Transient Markers in Take regions with Flex enabled.
Performance is improved when making Flex Pitch edits in the Tracks area with a large number of selected regions.
Alchemy's performance is improved.
Performance is improved when moving regions in projects with a large number of tracks and regions.
Projects containing a large number of flex-pitched regions now open more quickly.
Resolves an issue where loading a project saved with a Summing stack selected that contains Software Instruments that have no regions and/or with the tracks turned off could load the Software Instruments into memory.
Accessibility
VoiceOver now announces the state of Automation mode buttons on channel strips.
VoiceOver now announces the status of the Pause button in the LCD.
VoiceOver no longer announces hidden controls in the Smart Controls view.
VoiceOver no longer reads the values of pan knobs that are currently hidden in Sends on Faders mode.
VoiceOver now announces the state of the Details button and the Follow button in the Drummer Editor.
VoiceOver now announces left-click and Command-click Tool selections in the Control Bar.
VoiceOver now announces the name of the Time Quantize button in the Piano Roll.
VoiceOver now announces changes in value when the Next/Previous key commands are used to change Quantize values.
VoiceOver now announces state of key commands for Cycle, Mute, Track Solo, Input Monitoring, Track On/Off, and Lock/Unlock Track.
VoiceOver now announces the selection state of focused tracks.
Spatial Audio
Fixes an issue where adding a new 3D Object track for the first time to a Spatial Audio project could cause the Renderer to switch from the current model to the Apple renderer.
The Dolby Atmos plug-in now offers a 5.1.2 monitoring option.
Fixes an issue where setting a project to Dolby Atmos could output to 7.1.4 even when the mode defaults to Apple Renderer.
It is now possible to monitor Dolby Atmos projects directly via HDMI to a surround-capable receiver/amplifier.
The metering for Height channels now shows as post-fader on the Master channel as expected.
Loading a Master Bus channel strip setting in the 7.1.4 channel format now preserves the 7.1.4 channel layout as expected.
Session Players
Resolves an issue where loading a user-created Drum Machine Designer patch could set the input to a bus and fail to load the Drum Machine Designer instrument.
Using the Create Drummer Region command in a Marquee selection now creates a region that corresponds to the Marquee.
Smart Tempo
In cases where there is not an existing Smart Tempo Multitrack Set, selecting an audio file in the Smart Tempo Multitrack Set window and disabling the “Contribute to analysis” check box now causes the Update button to change to Analyze as expected.
Pressing the Space bar now immediately stops a Free Tempo recording.
Fixes an issue where projects previously open in the same Logic Pro session could unexpectedly affect “Contribute to Analysis” in the Smart Tempo editor.
Recording
Audio regions recorded to unnamed tracks now include the project name and track number in their name.
Mixer
The channel strip Stereo Pan control and the Pan menu now can be adjusted when Caps Lock is enabled.
Creating a single Multi-timbral Software Instrument in the New Track Sheet no longer creates two Software Instrument instances in the All view of the Mixer.
Resolves an issue where remaining tracks in a Multitimbral Software Instrument Track Stack could unexpectedly rename the channel strip.
Adjusting the activity status of a speaker in the Surround panner no longer causes the signal to unexpectedly mute.
Groups now immediately show as inactive when switched off for a selected set of channels in the Mixer.
Metering now correctly works on individual channel strips with plug-ins that send to more than two channels and are routed to a surround bus.
Option-clicking on a send in a selected group of channel strips now sets all corresponding sends to 0 dB as expected.
Fixes an issue where performing Undo after adjusting the fader values of grouped channels with Group Clutch enabled and then disabled could cause the faders to jump up to +6 dB when one member of the group is touched.
Setting multiple selected channels to No VCA now works as expected.
Alchemy
The oscillator section in Alchemy offers a new Wide Unison mode.
All controls for Additive Effects now accept typed-in values as expected.
Values typed into parameters related to milliseconds (ms) in Acoustic Reverb are no longer interpreted as full seconds.
Resolves an issue where performance control destinations for modulation could show as duplicated.
Sampler, Quick Sampler, and Quick Alchemy
The Playback direction button in Quick Sampler now immediately updates when clicked.
The view now scrolls correctly when dragging the Trim marker in Sample Alchemy.
It is now possible to adjust the level of a group in Sampler up to +24 dB.
The Up/Down buttons for navigating zones in Sampler now remain available after adjusting the start or end positions of samples.
The general Zoom/Scroll key commands now can be used to trim the current view in Sample Alchemy.
Handles and Trim Handles in Sample Alchemy behave correctly when click-dragged, even when the plug-in window does not have focus.
The Ancient Vocal Chop and Baily Glide plug-in settings for Quick Sampler now open in Classic mode, as expected.
Plug-ins
The MIDI Scripter plug-in now shows in Logic Pro when running in dark mode.
Fixes an issue where clicking on Sampled IR in Space Designer could activate Synthesized IR mode unexpectedly.
Resolves an issue where repositioning the playhead could cause audio to cut out on channel strips that use Step FX.
The preset Note Repeater in Scripter now works as expected.
The wet/dry setting on Ringshifter is now always set to 100% wet when inserted on an Aux.
There's now a DI Delay Compensation switch in Bass Amp Designer to improve phase correlation when blending between Amp and Direct Box in the plug-in.
StepFX now includes presets using Sidechain.
The Beat Breaker preset called “Basic / 2 Slices, Speed 66%” no longer plays the slices at 50% speed instead of 66%.
Resolves an issue where ES2 could produce glitching sounds when using Sine Level or Poly Voice mode on Apple Silicon computers.
Mono > Stereo instances of Console EQ no longer can cause unexpected feedback.
Using the Delete all Automation key command while an Audio Unit window has key focus no longer causes the Audio Unit window to go blank.
The menu for the compression section of Phat FX can now be opened by clicking on the Up/Down arrows.
Beat Breaker now offers new default patterns divided evenly into 2, 4, 8, 16, and 32 slices.
Mastering Assistant
There is no longer unexpected latency with bounces from projects that use the Clean or Clean + Excite mode in Mastering Assistant.
Mastering Assistant analysis is no longer incorrectly triggered in projects that contain no regions, but are previewing audio from Ultrabeat, etc.
Mastering Assistant no longer allows the -1 dBFS peak limit to be exceeded in certain cases.
Automation
The Consolidate Relative and Absolute for Visible / Automation menu item now only displays when automation types that support relative automation are active in the lane.
Region-based Automation is now pasted as Track-based Automation when pasted to an area of a track that does not contain regions.
Pitchbend now works as expected with zones in Sampler that do not have Flex Pitch enabled.
Selecting Region-based automation points on a region now deselects previously selected automation points on other regions.
Disabling Region-Based Automation no longer dims the Power button for MIDI CC data lanes in the Piano Roll.
The movie window now updates to show the correct frame when moving Region-based automation points.
The Autoselect automation parameter now works as expected when clicking any plug-in control.
Automation of the Gain plug-in no longer exhibits unexpected latency.
Region-based automation is now drawn correctly when recorded into projects that start earlier than 1 1 1 1.
Automation lane views for all tracks are now maintained when switching into Flex view and then back to Automation view.
Flex Time and Flex Pitch
Flex Pitched notes now play as expected when clicked while Record or Input Monitoring is active on the track.
Flexed audio tracks using Monophonic or Slicing mode no longer produce clicks at tempo changes.
Takes and comping
Fade-ins are now applied when flatten and merge is performed on Comps.
Renaming a take that encompasses the entire length of an audio file no longer unexpectedly changes the file name.
Comps in Take Folders are now preserved when performing Cut Section Between Locators on a section that includes the end of one Take folder and the beginning of another, with a gap in-between.
Track Stacks
Record-arming a Track Stack now arms the grouped audio tracks it contains.
Dragging a subtrack out of a Track Stack that is assigned to a VCA now removes the assignment for the subtrack.
Fixes an issue where Track Stacks could sometimes be dimmed when some, but not all, subtracks are muted or off.
It's now possible to replace stacked instrument patches that are inside a Summing Stack with single track patches.
Track Alternatives
Loading a patch on a Summing Stack containing sub-tracks with Track Alternatives no longer causes inactive alternatives to be deleted.
Track Alternatives can now be created for the Stereo Output track.
Selection-Based Processing
Using Selection-Based Processing on a Marquee selected section within a Take Folder no longer creates an unexpected comp.
Selection-Based Processing on a comp now retains the comp.
Score
The spacing of notes is improved in cases where there is a dotted note on a line with the stem pointing upward.
Command + Z to undo now works after deleting a Score Set.
Upward bends in TAB staves now display correctly.
Importing an instrument track no longer can cause Score Sets in the current project to disappear.
Imported Score Sets can now be deleted from a project.
Live Loops
“Join Region and Fill Cell” now works as expected.
Recording a performance in Live Loops now temporarily puts all tracks into Automation: Latch mode.
Fixes an issue where changing patches for a Live Loop track could cause the length of cells to change unexpectedly.
It's now possible to paste MIDI notes into a Live Loops cell.
Step Sequencer
It's now easier to use the disclosure triangle to open sub-rows in Step Sequencer.
Pattern regions now play back correctly immediately after being nudged.
Pattern Regions now immediately play as expected after using the Slip/Rotate tool to drag their contents to the left.
The “Separate pattern region by kit piece” command on Drum Machine Designer tracks is now applied to the correct area of the Pattern Region, in cases where the left border of the region has been moved to the right.
The length and number of steps of a newly created Pattern Region accounts for Time Signature changes correctly.
The maximum possible pattern length of a Pattern region is now 4 bars of the current time signature.
Step Sequencer now allows pattern lengths to be added based on 5/4 and 7/8 time signatures.
The Step Sequencer Inc/Dec controls now work in Loop Edit mode.
Fixes an issue where Pattern Regions on frozen tracks could be edited unexpectedly.
Region-based automation now displays properly on Pattern regions in tracks that have been partially frozen, and on regions that have been frozen and then unfrozen.
It's now possible to assign MIDI channels per step in a Pattern Region.
MIDI
Reset messages for Software Instruments now work correctly.
Sustain messages are now sent correctly when playing back regions with Clip Length enabled in cycle mode.
There is now an “Internal MIDI in” setting in the Track Inspector to allow for recording MIDI from any other software instrument or External MIDI Instrument track.
The “Send all MIDI settings” key command now sends program changes to external devices assigned to empty tracks.
Resolves an issue where 3 bytes of random MIDI data would be sent when playing back regions containing SysEx data with MIDI 2.0 disabled.
New 'internal MIDI in' feature allows recording of MIDI from other tracks, including MIDI FX plug-in output and 3rd party MIDI generators.
The “Delete MIDI events outside region boundaries" key command now correctly creates a starting CC event in the region to match the last matching CC of the same type in the track.
Fixes an issue where Chase could cut off notes that are preceded by notes of the same pitch on tracks with third-party instrument plug-ins.
Editing
The Humanize transform set now works as expected when the Randomize functions for Position, Length, or Velocity are set to very small values.
The menu item Delete and Move in the Event List is now only displayed if regions are displayed in the window.
When MIDI 2.0 is selected in the Settings, clicking on an Event in the Event List no longer plays events back with MIDI 1.0 resolution.
Fixes an issue where using the Cut command in the Audio Track Editor could switch the view to another editor.
When a region in the Project Audio window is double-clicked, the Audio Track editor now opens as expected.
The content link buttons for the Piano Roll and Score show the correct color as expected when toggled using the mouse.
The Event List correctly updates to reflect changes made by using key commands to select notes in other editors.
Resolves an issue where the Velocity tool in the Piano roll could affect the values of non-note events.
Fixes an issue where applying the Transform set Double Speed could cause the notes to disappear from the Piano Roll.
Step Input
Extending the length of note entered using Step Input now works correctly.
Global Tracks
Adding multiple audio Apple Loops of the same key to different tracks of a new project now changes the project key as expected.
Clicked-in Tempo points are now placed at their correct positions in projects that start earlier than 1 1 1 1.
Share and export
When No Overlap is enabled, regions bounced onto existing regions no longer overlap them.
Audio files bounced from Logic Pro now include the proper Encoded Date in the metadata.
Fixes an issue where MIDI regions could be truncated when bounced in place.
Fixes an issue where audio files including Volume/Pan automation exported from mono tracks that use plug-ins could export as stereo files.
It is now possible to bounce sub-channels of multitimbral instrument tracks as individual files.
Import
Resolves an issue where, when dragging multiple audio files into a project, choosing the “Place all files on one track” option could create a second track, placing the first file on one track and the rest on the second.
Output channels in the Mixer can now be imported from other Logic Pro projects.
Apple Loops
The Loops browser now correctly shows the same enharmonic key an Apple Loop was tagged with.
Apple Loops now preview using the Key Signature active at the current position of the playhead.
It's now possible to add Aliases to bookmarks and untagged loops.
Dragging an Apple Loop from the loop browser to an existing track no longer changes the input for the track.
Fixes an issue where MIDI Apple Loops could jump to the start of the nearest bar position when dragged from the Loop Browser to the middle of a bar.
Video Support
A secondary screen that is running a full screen video with Show Animations off will no longer remain black after closing the project.
Key Commands
The “Increase (or Decrease) last clicked parameter” key commands now work for controls in the LCD.
The “Record off for all” key command now works on Software Instrument tracks in cases where one or more audio tracks are also record-enabled.
There is now a key command to add to the current selection of regions or cells that are assigned to a toggle solo group.
The Zoom Toggle key command now works in the Step Editor.
Compatibility
GarageBand projects that use Pitch Correction now sound the same when opened in Logic Pro.
Undo
If Undo is used immediately after creating a project, the New Track Sheet is displayed as expected rather than leaving a project with no tracks.
Undo/Redo now works as expected with Audio Unit v3 plug-ins.
Changing the Automation Mode, or changing a Track On/Off state now creates an Undo step.
Performing Undo after adding a surround track no longer corrupts Drummer tracks in the project.
Logic Remote
Logic Remote immediately updates to show time and signature changes made in Logic Pro.
Control Surfaces and MIDI controllers
Controls on Control Surface devices that use Lua scripts now provide feedback when learning assignments for them in Logic Pro.
Illuminated buttons on control surfaces now show the correct state for Show/Hide Track Editor.
General
The LCD now displays the Cycle start and end times in both SMPTE time and Bars/Beats when the secondary ruler is displayed.
Search in the All Files browser now finds matching items in bookmarked folders.
Fixes an issue where the visible editor in the Main window could unexpectedly switch when rubber-band selecting regions.
Audio Take folders created in Cycle mode now loop as expected after recording when Loop is enabled in the Region Inspector.
It's now possible to create external MIDI tracks when Core Audio is disabled in Logic Pro.
Resolves an issue where deleting a Flex marker from an audio region while a Marker List is visible could switch the key focus to the Marker List.
Track information pasted into a text editor now includes the TIME position when the Use Musical Grid setting for the project is not enabled.
Input monitoring buttons are now displayed on audio tracks when Logic Pro has fallen back to an alternate audio device because the selected device is not available.
Previewing an audio region in the Project Audio window no longer causes it to jump to the top of the window.
Command+Option clicking on the On/Off button of a track now toggles the button for all tracks, as expected.
Copy/paste of regions now works when Automation view is enabled.
Right-clicking on a looped segment of a region now opens the contextual menu as expected.
It's now easier to see when black keys are depressed in the Musical Typing window.
The right arrow key now reliably moves the text cursor in the Bounce > Save As file name panel.
Groove Templates created from audio regions now work in Smart Quantize mode.
Dragging multiple regions from the same audio file from the Project Audio browser to the Tracks area now works correctly.
Audio regions are no longer moved to unexpected positions when trimming, if absolute Snap mode is on and the region anchor is moved away from the start of the region.
Fixes an issue where pasting a Marquee selection with No Overlap and Snap Edits to Zero Crossings mode enabled could delete a non-overlapping part of an existing region.
Autozoom now triggers when a region's upper right corner is dragged in the Main window, or the Audio Track Editor.
The Playhead no longer briefly appears in the wrong position when zooming horizontally.
The Time Ruler now immediately updates to reflect changes made to the “Bar Position [bar position] plays at SMPTE” setting.
The File browser correctly shows the full path when using Save As.
submitted by bambaazon to Logic_Studio [link] [comments]


2024.05.13 22:41 SolongLife A Guide to Free Bloomberg Terminal Alternatives

In today's fast-paced financial world, a lot of tools are available to assist both novice and seasoned traders and investors. These tools, often free or low-cost, offer a range of features that can help navigate the complexities of the financial markets. This guide provides insights into some of the most effective and accessible resources in the realm of finance.
Financial Modeling and Analysis Tools:
Koyfin: This free dashboard is reminiscent of FactSet, offering detailed analysis, including macroeconomic data, security snapshots, and various financial estimates. The data quality is notably high.

Atom Finance: Known for its user-friendly interface, this tool offers comprehensive features like a detailed company events calendar, interactive valuation tools, and financial modeling capabilities. However, it covers a limited number of companies.

Aswath Damodaran: Offers a collection of financial data across various sectors, including capital expenditure and valuation ratios.

Finviz: Provides a quick overview of financial data such as dividends, sales growth, and analyst ratings.

Ycharts: A trial-based service, similar to FactSet, providing extensive financial information.

Eloquens: A platform for accessing free financial models and templates.

Yahoo Finance: Known for its revenue and EPS estimates, along with real-time pricing data.

Government Filings (EDGAR): An official source for accessing U.S. government public filings like 10-Qs and S-1s.

IMPORTXML & GOOGLEFINANCE in Google Sheets: These tools allow for importing structured data and basic live financial data, respectively.
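To illustrate the two functions just mentioned, here are minimal cell-formula sketches (the ticker, URL, and XPath below are placeholder examples, not data from this guide):

```
=GOOGLEFINANCE("NASDAQ:AAPL", "price")
=IMPORTXML("https://example.com/page", "//table//td")
```

GOOGLEFINANCE pulls basic live pricing for a ticker, while IMPORTXML scrapes structured data from a page via an XPath query.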

Market Research Tools:
ABI Research & IDC: Offer high-quality research, sometimes partially free.

Gartner: Known for its informative free webinars, providing more depth than their online print material.

Statista: A comprehensive source for statistics from various industries.

Markets and Markets & Research and Markets: Provide initial research, especially for niche industries.

TAM Workshop: Offers a tutorial on market sizing with creative data-finding resources.

Macro Data Tools:
Trading Economics: Offers macro data by country and includes forecasts.

World Bank: A vast database covering a wide range of economic and social statistics.

BLS (Bureau of Labor Statistics): Provides U.S.-specific statistics like CPI and unemployment rates.

Investing.com: Features data on bond yields and credit default swaps.

VC/Startup Databases:
Fundz & Crunchbase: Offer detailed information on funded startups and private company investments.

IPO Data by Professor Jay Ritter: A comprehensive database on various aspects of IPOs.

TechLeap: A European-based database focusing on startups.

Stock Research Reports:
Morningstar: Accessible through a free trial, offers in-depth research reports.

ValueInvestorsClub: A platform for accessing detailed company research.

SeekingAlpha: Features a mix of casual and professional research, often including excerpts from detailed reports.

These tools collectively serve as a potent alternative to costly platforms like Bloomberg terminals, offering a wealth of information to finance professionals and enthusiasts alike. They are designed to cater to a variety of needs, from market research to financial modeling, making them invaluable in the dynamic world of finance.
submitted by SolongLife to TraderTools [link] [comments]


2024.05.13 20:17 ahmed_samir_gho Payment terms

My invoice has a net 30 payment term, but the payment terms in my invoice sheet template are 45. Is this fine, or should I change it to 30?
submitted by ahmed_samir_gho to Welocalize [link] [comments]


2024.05.13 19:12 JoelCanon [JB CM SEPT ß-3/10] Septenary 'Fundamentals Extended 1.1'

[JB CM SEPT ß-1/10] Septenary REDDIT
[JB CM SETP ß-2/10] Septenary 'Fundamentals' REDDIT
CM Septenary DRIVE
A good, good morning/afternoon/evening to all the people of worldbuilding! How is everyone doing this morning/afternoon/evening? I hope the answer is great, because all of you are truly great and deserve the great things in life. Today is an especially great day because it's day 3 of my release of 'Septenary,' a creative tool which comes as part of another project of mine, 'Canon Mode.' The first two links are the first two Reddit posts related to Septenary; the third one is the direct link to the files within this project on Google Drive. Today we'll be opening the second file in either of the two 'CM Septenary COLOGRAYSCALE' folders; the pertaining file is called 'CM SEPT 2 Fundamentals Extended SHEET COLOGRAYSCALE'.
CM SEPT 2 Fundamentals Extended SHEET COLOR
CM SEPT 2 Fundamentals Extended SHEET GRAYSCALE
The first thing you'll notice about either of these spreadsheets is that they display the 'FUNDAMENTALS Extended 1.0' septenary, except here the cells have been greyed out. This formatting decision was made to illustrate that each column within the fundamentals septenary is no longer the main focus of this spreadsheet, but is instead a marker for an extended sub-septenary group. There are seven main groups in the 'Fundamentals' septenary, and each one has its own extended list of sub-septenaries which help define the main groups.
The greyed out 'FUNDAMENTALS SEPTENARY 1.0' septenary is marked with a '1.0', indicating that it's the root septenary for the rest of the septenaries in the spreadsheet. The second septenary, called 'CHAMPION 1.1,' is marked with a 1.1. This indicates that it's the first septenary to be taken into account after the root septenary. This post will explore the 'CHAMPION 1.1' septenary in further detail.
(CHAMPION)
If we look at the 'CHAMPION' marker, we can see listed below it the usual seven terms, which will act as further markers when used to cross-reference other cells within the septenary. If you didn't see yesterday's post, here's a refresher on the actual definitions of these champions.
0. Champion: a champion is masterful
  1. Saint: a saint is benevolent
  2. Chief: a chief is leaderly
  3. Sage: a sage is intellectual
  4. Hero: a hero is selfless
  5. Seer: a seer is a visionary
  6. Noble: a noble is honorable
  7. Heretic: a heretic is malevolent
The interesting thing about the 'CHAMPION' septenary is that its creation wasn't inspired by mythical or legendary figures; instead, the inspiration for this group came from the desire to make a septenary that describes basic and fundamental occupations. When first considering how to create this particular septenary as an aid for character creation, the important factor was, "what will they be doing?" That is, what are these characters actually going to be doing once people create them?
{CHAMPION}
There were a few different contenders for the name of the 'CHAMPION' group. Other titles considered apart from champion were master, legend, hero, and maybe a few more unconventional ones. So what is a champion anyway? Well, it's someone who is victorious. This definition had a really nice overtone of implying someone or something is the ultimate, the greatest there is. And that's what these elements in the list are meant to portray: these individuals are the absolute best at what they do and have no contenders.
{Trait}
The trait septenary acts as the base defining characteristics of each champion. These are the starting points for building a champion. From here these can act as base traits which can be used to further branch out and find more related traits, behaviors, or qualities that correlate with the type of character one is wanting to build.
{Occupational}
This septenary describes the type of work your champion might be doing. Beyond a literal profession, these descriptions can explain much more about how your champion's occupational time is spent.
{Class}
This group is much like the idea of social status: this septenary defines your champion's place or position in society.
{Rank}{Title}
These two groupings can be a little difficult to understand without the context of what they were created for. These specifically were meant to illustrate the original seven champions as they were in the fictional universe Dokimi. In the lore of this universe the first seven champions exist at the beginning of time as manifestations of the seven axiom forces. These beings were deities, and existed as the origin of Dokimi. They each had a rank and title, which went as follows.
  1. Lord Saint the Savior
  2. Executive Chief the President
  3. Elder Sage the Legend
  4. Master Hero the Justice
  5. Mother Seer the Universe
  6. Queen Noble the Honorable
  7. Deviless Heretic the Princess
In Dokimi each of these seven champions live up to their rank and title, as each have complete control and influence across their respective domain.
[Conclusion]
It could be said that a champion is always victorious, because if they weren't then they'd no longer be the champion of whatever they're involved in or with. The 'CHAMPION' sub-septenary is meant to provide a strong and potent backbone for creating a champion who is truly worthy of their weight in stature and esteem. With this template/outline, you too might be able to create your own strong and imposing champion worthy of the legends. I hope you've had or will have fun with this Reddit edition of 'Septenary.' Until next time, happy worldbuilding/universe creating to all of you!
submitted by JoelCanon to worldbuilding [link] [comments]


2024.05.13 15:52 No_Fox_Given82 A rough template for the new players who are getting lost.

Save (file - download) yourselves a copy to your phone or PC or whatever and edit your mods into it.
I sent this sheet to a friend and he built a perfectly smooth LO using it, since then 3 more people have used it with the same result. It is just the general LO that I usually follow as a very basic template.
Since being back I've seen a lot of posts about load ordering and asking for help; of course, a lot of players are new to the scene after the TV series blew the roof off. Choosing your mods is hard, and getting them to work together is even harder. It's great that new players are picking this game up and learning how to mod it!! But do understand that when you guys ask for help, it is hard to give: building your own load order takes weeks or months, and that's hours on end of messing around with things, so offering remote assistance to someone on a forum is really, really difficult.
Of course, there is no such thing as the right way to build a load order; all mods will play differently with one another. Two people could have the exact same load order on the exact same platform, and one of them could run like a dream while the other runs like crap. That's just modding for you.
Try to think of modding as a set of scales. On one side of the scales you have the functionality and stability of the game. On the other side of the scales you have the mods and all the adjustments they make, it's about balancing those scales to perfection, too far one way and the scales tip. Sometimes you have to sacrifice some things, save them for another play through etc. You might have to look at a bunch of game play changes and think "okay, these are all a bit much, which ones do I really want and which ones can I do without this time". Or maybe you want all these lush textures but they are clogging up the frames, so you just need to ditch some (textures are heavy lol).
Disclaimer - I'm not saying this is how everyone should build it, it's not like I'm putting this here so people can say "oh, no I think it should be in this order".. I am simply putting this here for the new player who is struggling to get an idea of how things go and if just a few of those players use this template and get a good load order off of it then that's a win for me :)
submitted by No_Fox_Given82 to Fallout4ModsXB1 [link] [comments]


2024.05.13 14:58 gringomofo Salary distribution organizer template for sale (google sheets)

It's almost payday!
I made a salary distribution organizer template so u can manage ur salary every payday. U can use this to see how to allocate ur salary across savings, investments, and other goals every payday.
This is good for visualizing ur money in a table format and seeing if u still have anything remaining until the next payday... this is all automated so no hassle, and it is only 100 pesos. so if ure interested just send me a msg :))
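For a sense of what such an organizer computes under the hood, here is a minimal plain-JavaScript sketch of percentage-based salary allocation (the bucket names and percentages are invented examples, not the template's actual contents):

```javascript
// Split a salary across allocation buckets; each bucket is a fraction of the
// salary, and whatever is left over is reported as "remaining" until next payday.
function allocateSalary(salary, buckets) {
  var result = { allocations: {}, remaining: salary };
  for (var name in buckets) {
    var amount = Math.round(salary * buckets[name] * 100) / 100; // round to centavos
    result.allocations[name] = amount;
    result.remaining -= amount;
  }
  result.remaining = Math.round(result.remaining * 100) / 100;
  return result;
}

// Example: a 30,000-peso salary split 20% savings, 10% investments, 5% goals
var plan = allocateSalary(30000, { savings: 0.20, investments: 0.10, goals: 0.05 });
// plan.allocations is { savings: 6000, investments: 3000, goals: 1500 }; plan.remaining is 19500
```

The same logic is what a sheet does cell-by-cell: one column of percentages, one of computed amounts, and a remainder cell.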
submitted by gringomofo to classifiedsph [link] [comments]


2024.05.13 14:49 gringomofo Salary distribution organizer template for sale (google sheets)

Just in case ure interested, I made a salary distribution organizer template so u can manage ur salary every payday. U can use this to see how to allocate ur salary across savings, investments, and other goals every payout. This is good for visualizing ur money in a table format and seeing if u still have anything remaining until the next payout... this is all automated so no hassle. this is only 100 pesos and I just actually added some features on it... sooo if ure interested you can msg me :))
submitted by gringomofo to phclassifieds [link] [comments]


2024.05.13 14:26 Lopsided_Grass_7546 A Vuetify Bug Component Not error show fix error ...

https://preview.redd.it/2ig2dma1t60d1.png?width=900&format=png&auto=webp&s=9024c9ed967b224f8998eb5d7e8e7aa0dda75c82
Based on the provided context, it appears that there is a bug being experienced with Vuetify and Vue.js components. Specifically, the register-gallery component is not being recognized and no error is being shown.
https://preview.redd.it/agme212ut60d1.png?width=1920&format=png&auto=webp&s=554ba01eff2f7f183b9b37e753ca5102d914fa53
A page where no error is shown
submitted by Lopsided_Grass_7546 to vuetifyjs [link] [comments]


2024.05.13 14:11 FochingGreatStache How effective is it to use data to tackle trauma?

The tl;dr is that I have dissociative identity disorder, and I have been trying to quantify the trauma held by most of my alters, as otherwise we end up playing therapeutic "whack-a-mole." In an effort to address these issues, I attempted to spreadsheet all of them. While I tried to optimize it for DID, I don't see any reason why it can't also be used for dealing with parts work or for single individuals? I would love feedback on how accurate or effective the model is, and suggestions for any changes. Admittedly, I am an odd bird who copes through intellectualizing and systematizing problems.
If I am being honest, I would say that 60-70% of the work here is therapy and/or mindfulness. However, I think the insights I have gotten from the data have been helpful. It also seems to me that this can be extremely useful to individuals who feel that therapy isn't "concrete" enough for them. I am not one of those individuals, but I know plenty who are. I would appreciate any and all feedback, and thank you all in advance for your time!

Hello! I am not sure if what you are about to see is going to be comprehensible to anyone but me. But, if it at least prompts people to consider the way we try to deal with trauma, then that will be sufficient for me. I would also love feedback about my work and would welcome any suggestions on things we need to model better. Here is the template that is available for use if you are interested:
Trauma Database Template
The tl;dr here is that I have created a spreadsheet that tries to do a few things: track all traumatic issues in the system.
1) score each issue 1 - 5 on the extent to which the issue impacts each alter. (Sometimes, they can give the issue a score of up to 10 in select circumstances.)
2) categorize each issue based on who or what it is connected to.
3) assign each person that the trauma is connected to a trigger score. This is based on the idea that you can have extreme trauma that is scored at a 5, like physical abuse, but perhaps be insulated from the reminders of it on a daily basis. On the other hand, maybe you have more mundane trauma with your mom that scores a two, but you talk to her every day. I multiplied the trauma score by the trigger score to give me a kind of composite number that operates almost like a threat index. That tells me where the fires are.
4) provide an inventory for all issues held by all alters.
5) determine numerically what the most significant triggers are for the system.
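The composite number described in point 3 can be sketched in a few lines of JavaScript (the scores below are invented examples, not data from the actual sheet):

```javascript
// Composite "threat index" for one issue: the trauma score (1-5, up to 10 in
// select circumstances) multiplied by the trigger score for the person involved.
function threatIndex(traumaScore, triggerScore) {
  return traumaScore * triggerScore;
}

// Example: severe trauma (5) that is rarely triggered (1) versus
// mundane trauma (2) with a near-daily trigger (5)
var rare = threatIndex(5, 1);  // 5
var daily = threatIndex(2, 5); // 10: the mundane-but-daily issue ranks higher
```

This is exactly the "where are the fires" property described above: frequency of exposure can outweigh raw severity.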
I started this because I was working through trauma that emerged with someone I have a current relationship with. Dealing with that trauma is hard enough. Doing so while having roughly two dozen alters sometimes feels next to impossible. It often feels like the process of trying to manage that is almost like a second job. This person has been trying extremely hard to change, and I know they never intended to hurt me. But the consequences of their actions have been devastating. However, I have been finding that whatever it is that I do appears to have limited value. I have journaled approximately 800 pages, and I have taken steps for the trauma-holding host to go on a sabbatical while others front for him while he processes the baggage. There has been no noticeable difference. We have tried to work through the baggage with the trauma-holding host, and it hasn't worked.

Normally in life, when I encounter a significant problem I try to "spreadsheet it." I know that for some people it is intimidating -- and I am first to admit that I do not make the most user friendly sheets. But they make sense in my head. In quantifying and itemizing trauma, I made some important conclusions that might be helpful in dealing with issues. Namely:
1) the host was actually less of an issue than the repository of anger/rage. That might seem intuitive, but the host was "acting out" more. Therefore, my assumption was that hyperfocusing on the host was the optimal strategy. That might still be the general strategy, but the data indicates that focusing on the host disproportionately will not dramatically reduce the total amount of trauma.
2) we have been in therapy a great deal due to issues with our BFF. We thought we were silly for allocating so much of our resources and mental energy into normalizing things with them. However, column C3 reveals that BFF's impact on the trauma comprises 33.98% of all trauma. However, when adjusted for the extent to which we encounter them as a trigger, that number increases in column F3 to 42.41%! Our decision to focus on them is validated.
3) column AK contains our system's assessment as to the overall accuracy of many of the thoughts and beliefs that undergird our issues. The accuracy of each statement is assessed 1 - 10. I decided to divide this number by the total trauma score to get a number in column AL to get a ratio that allows me to prioritize my therapy work. I call it a Processing Resistance Quotient, but that is not all it is representing. It is basically a "bang for your buck" measurement where the difficulty of changing a belief is balanced by the amount of trauma the issue creates -- giving you an idea for where you can start your work to experience what will (hopefully) be quick relief. The idea here is that if the system is dealing with issues stemming from faulty assumptions, then individuals using modalities like REBT / CBT can get results more efficiently by focusing on the lower percentages.
4) some alters might really be NPCs as they had no connection whatsoever to any of the issues raised. The data indicates that problems in the system really are system problems. Perhaps this is simply a reflection of the choices in my model, but if it is legitimate, it shows me that at least in my system there are no problems where the issue is concentrated in one alter. This indicates the extent to which issues are shared even beyond trauma holders.
Here are some things that are not modeled that I might later add for more information:
  1. it does not assess the functionality of alters. It does not factor whether a trauma holder might be more able to hold trauma because that is their job, for example. I can think of ways that can be modeled -- but it would have to be modeled on a case-by-case basis.
  2. it does not factor in alters that front or seem to comprise a greater share of the total system. This model is made with the assumption that all alters are created equal.

If I had to make it again, I would add categories that assess the emotional resilience of each alter. I know, for example, that the host is probably less resilient than anger or the system protector. I might add in a category that factors in the extent to which each alter is critical to the functionality of the system as a whole. When the functionality score is multiplied by the trauma score of the issue held by the alter and the trigger score, that might provide a more accurate measurement of the total functionality impact.

If you think that this might be helpful, you are welcome to make a copy and simply replace the names, issues, and scores with your own. If you have any questions (I am still working to make this more user friendly), feel free to ask! There are lots of ballpark assumptions, and I would welcome any criticism or feedback. But I hope that even if the numbers give you a stroke, it at least allows you to think about ways to tackle trauma systemically.
submitted by FochingGreatStache to Advice [link] [comments]


2024.05.13 13:59 FochingGreatStache How viable is it to quantify traumas, triggers, toxic relationships, etc. based on this model?

The tl;dr is that I have dissociative identity disorder, and I have been trying to quantify the trauma held by most of my alters, as otherwise we end up playing therapeutic "whack-a-mole." In an effort to address these issues, I attempted to spreadsheet all of them. While I tried to optimize it for DID, I don't see any reason why it can't also be used for dealing with parts work or for single individuals? I would love feedback on how accurate or effective the model is, and suggestions for any changes. Admittedly, I am an odd bird who copes through intellectualizing and systematizing problems.
If I am being honest, I would say that 60-70% of the work here is therapy and/or mindfulness. However, I think the insights I have gotten from the data have been helpful. It also seems to me that this can be extremely useful to individuals who feel that therapy isn't "concrete" enough for them. I am not one of those individuals, but I know plenty who are. I would appreciate any and all feedback, and thank you all in advance for your time!
Hello! I am not sure if what you are about to see is going to be comprehensible to anyone but me. But, if it at least prompts people to consider the way we try to deal with trauma, then that will be sufficient for me. I would also love feedback about my work and would welcome any suggestions on things we need to model better. Here is the template that is available for use if you are interested:
Trauma Database Template
The tl;dr here is that I have created a spreadsheet that tries to do a few things: track all traumatic issues in the system.
  1. score each issue 1 - 5 on the extent to which the issue impacts each alter. (Sometimes, they can give the issue a score of up to 10 in select circumstances.)
  2. categorize each issue based on who or what it is connected to.
  3. assign each person that the trauma is connected to a trigger score. This is based on the idea that you can have extreme trauma that is scored at a 5, like physical abuse, but perhaps be insulated from the reminders of it on a daily basis. On the other hand, maybe you have more mundane trauma with your mom that scores a two, but you talk to her every day. I multiplied the trauma score by the trigger score to give me a kind of composite number that operates almost like a threat index. That tells me where the fires are.
  4. provide an inventory for all issues held by all alters.
  5. determine numerically what the most significant triggers are for the system.
I started this because I was working through trauma that emerged with someone I have a current relationship with. Dealing with that trauma is hard enough. Doing so while having roughly two dozen alters sometimes feels next to impossible. It often feels like the process of trying to manage that is almost like a second job. This person has been trying extremely hard to change, and I know they never intended to hurt me. But the consequences of their actions have been devastating. However, I have been finding that whatever it is that I do appears to have limited value. I have journaled approximately 800 pages, and I have taken steps for the trauma-holding host to go on a sabbatical while others front for him while he processes the baggage. There has been no noticeable difference. We have tried to work through the baggage with the trauma-holding host, and it hasn't worked.
Normally in life, when I encounter a significant problem I try to "spreadsheet it." I know that for some people it is intimidating -- and I am first to admit that I do not make the most user friendly sheets. But they make sense in my head. In quantifying and itemizing trauma, I made some important conclusions that might be helpful in dealing with issues. Namely:
  1. the host was actually less of an issue than the repository of anger/rage. That might seem intuitive, but the host was "acting out" more. Therefore, my assumption was that hyperfocusing on the host was the optimal strategy. That might still be the general strategy, but the data indicates that focusing on the host disproportionately will not dramatically reduce the total amount of trauma.
  2. we have been in therapy a great deal due to issues with our BFF. We thought we were silly for allocating so much of our resources and mental energy into normalizing things with them. However, column C3 reveals that BFF's impact on the trauma comprises 33.98% of all trauma. However, when adjusted for the extent to which we encounter them as a trigger, that number increases in column F3 to 42.41%! Our decision to focus on them is validated.
  3. column AK contains our system's assessment as to the overall accuracy of many of the thoughts and beliefs that undergird our issues. The accuracy of each statement is assessed 1 - 10. I decided to divide this number by the total trauma score to get a number in column AL to get a ratio that allows me to prioritize my therapy work. I call it a Processing Resistance Quotient, but that is not all it is representing. It is basically a "bang for your buck" measurement where the difficulty of changing a belief is balanced by the amount of trauma the issue creates -- giving you an idea for where you can start your work to experience what will (hopefully) be quick relief. The idea here is that if the system is dealing with issues stemming from faulty assumptions, then individuals using modalities like REBT / CBT can get results more efficiently by focusing on the lower percentages.
  4. some alters might really be NPCs as they had no connection whatsoever to any of the issues raised. The data indicates that problems in the system really are system problems. Perhaps this is simply a reflection of the choices in my model, but if it is legitimate, it shows me that at least in my system there are no problems where the issue is concentrated in one alter. This indicates the extent to which issues are shared even beyond trauma holders.
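The "bang for your buck" ratio from point 3 is straightforward to compute. A hedged sketch follows; the division direction matches the description above, and the numbers are invented examples rather than the sheet's real column AK/AL values:

```javascript
// Processing Resistance Quotient: the belief-accuracy score (1-10) divided by
// the total trauma score for the issue. Lower values mean a clearly faulty
// belief is driving a lot of trauma, i.e. an efficient target for REBT/CBT work.
function processingResistanceQuotient(accuracy, totalTraumaScore) {
  return accuracy / totalTraumaScore;
}

// Example: a clearly false belief (accuracy 2) behind heavy trauma (score 20)
// versus a fairly accurate belief (8) behind mild trauma (4)
var target = processingResistanceQuotient(2, 20); // 0.1 (work on this first)
var later = processingResistanceQuotient(8, 4);   // 2
```

Sorting issues by this quotient ascending gives the prioritized work queue the post describes.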
Here are some things that are not modeled that I might later add for more information:
  1. it does not assess the functionality of alters. It does not factor whether a trauma holder might be more able to hold trauma because that is their job, for example. I can think of ways that can be modeled -- but it would have to be modeled on a case-by-case basis.
  2. it does not factor in alters that front or seem to comprise a greater share of the total system. This model is made with the assumption that all alters are created equal.
If I had to make it again, I would add categories that assess the emotional resilience of each alter. I know, for example, that the host is probably less resilient than anger or the system protector. I might add in a category that factors in the extent to which each alter is critical to the functionality of the system as a whole. When the functionality score is multiplied by the trauma score of the issue held by the alter and the trigger score, that might provide a more accurate measurement of the total functionality impact.
If you think that this might be helpful, you are welcome to make a copy and simply replace the names, issues, and scores with your own. If you have any questions (I am still working to make this more user friendly), feel free to ask! There are lots of ballpark assumptions, and I would welcome any criticism or feedback. But I hope that even if the numbers give you a stroke, it at least allows you to think about ways to tackle trauma systemically.
submitted by FochingGreatStache to askatherapist [link] [comments]


2024.05.13 12:59 SirQueWryyyTea What is wrong with my code? I don't understand

This is my code for creating an invoice from a Google spreadsheet, where the invoice template is in the sheet "Inv" and the data comes from "Data Bank Statement".

    // Global variables
    var wsData = SpreadsheetApp.getActiveSpreadsheet();
    var url_base = wsData.getUrl().replace(/edit$/, '');
    var inv = wsData.getSheetByName("Inv");
    var invID = inv.getSheetId();
    var parentFolderId = "1GKKsQQnNsPdGnwPrPtrKXcoJanS0YkHU";

    function invoiceGenerator() {
      // Retrieve the Google Spreadsheet and its specific sheets
      var spreadsheet = SpreadsheetApp.getActiveSpreadsheet();
      var invSheet = spreadsheet.getSheetByName("Inv"); // The sheet from which PDF will be generated
      var dataBankStatement = spreadsheet.getSheetByName("Data Bank Statement"); // The sheet containing data to iterate over

      // Get or create the PDFs folder in Drive
      var pdfFolder = getOrCreateFolder("1GKKsQQnNsPdGnwPrPtrKXcoJanS0YkHU");

      // Iterate through the rows in the Data Bank Statement sheet
      var lastRow = dataBankStatement.getLastRow();
      var fileNameColNo = dataBankStatement.getRange(1, 1, 1, dataBankStatement.getLastColumn()).getValues()[0].indexOf("File Name to Print") + 1;
      for (var i = 2; i <= lastRow; i++) {
        var fileName = dataBankStatement.getRange(i, fileNameColNo).getValue() || "Invoice";
        var currentDate = Utilities.formatDate(new Date(), Session.getScriptTimeZone(), "yyyyMMdd");
        var fullName = fileName + ' ' + currentDate + '.pdf'; // Create the full file name

        // Generate and save the PDF
        savePDF(invSheet, fullName, pdfFolder);
      }
    }

    function savePDF(sheet, fullName, folder) {
      var blob = sheet.getBlob().getAs('application/pdf'); // Get the sheet as a PDF blob
      folder.createFile(blob).setName(fullName); // Create and name the PDF file in the folder
    }

    function getOrCreateFolder(folderId) {
      var folder;
      try {
        folder = DriveApp.getFolderById(folderId);
      } catch (e) {
        // If folder is not found, create a new one at the root of the Drive
        folder = DriveApp.getRootFolder().createFolder(folderId + " PDFs");
      }
      return folder;
    }

Whenever I try to run this code, this error keeps popping up:

    TypeError: pdfFolder.createFile is not a function
        at savePDF (Code:68:13)
        at invoiceGenerator (InvoiceGenerator:27:5)

Why?????
submitted by SirQueWryyyTea to GoogleAppsScript [link] [comments]


2024.05.13 10:34 Potential-Song9484 Light goes to the floor instead of the ceiling? It was okay on the lower level but not on the main level RCP?

submitted by Potential-Song9484 to bim [link] [comments]


2024.05.13 06:42 FochingGreatStache A Tool For Tracking Trauma Held By Alters

Hello! I am not sure if what you are about to see is going to be comprehensible to anyone but me. But, if it at least prompts people to consider the way we try to deal with trauma, then that will be sufficient for me. I would also love feedback about my work and would welcome any suggestions on things we need to model better. Here is the template that is available for use if you are interested:
Trauma Database Template
The tl;dr here is that I have created a spreadsheet that tries to do a few things:
  1. track all traumatic issues in the system.
  2. score each issue 1 - 5 on the extent to which the issue impacts each alter. (Sometimes, they can give the issue a score of up to 10 in select circumstances.)
  3. categorize each issue based on who or what it is connected to.
  4. assign each person that the trauma is connected to a trigger score. This is based on the idea that you can have extreme trauma that is scored at a 5, like physical abuse, but perhaps be insulated from reminders of it on a daily basis. On the other hand, maybe you have more mundane trauma with your mom that scores a two, but you talk to her every day. I multiplied the trauma score by the trigger score to give me a kind of composite number that operates almost like a threat index. That tells me where the fires are.
  5. provide an inventory for all issues held by all alters.
  6. determine numerically what the most significant triggers are.
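The composite described in item 4 is just multiplication and sorting; here is a minimal Python sketch (the issue names and scores below are invented for illustration, not taken from the actual template):

```python
# Hypothetical sketch of the composite "threat index" described above:
# trauma score (1-5, up to 10 in select cases) multiplied by trigger score.
# Names and values are illustrative only.

issues = [
    {"issue": "criticism from mom", "trauma": 2, "trigger": 5},
    {"issue": "physical abuse",     "trauma": 5, "trigger": 1},
    {"issue": "conflict with BFF",  "trauma": 4, "trigger": 4},
]

for row in issues:
    row["threat"] = row["trauma"] * row["trigger"]

# Sort so the biggest "fires" come first.
ranked = sorted(issues, key=lambda r: r["threat"], reverse=True)
for row in ranked:
    print(f'{row["issue"]}: {row["threat"]}')
```

Sorting by the product surfaces the issues where high trauma meets frequent triggering, which is exactly the "where the fires are" view.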
I started this because I was working through trauma that emerged with someone I have a current relationship with. Dealing with that trauma is hard enough. Doing so while having roughly two dozen alters sometimes feels next to impossible. It often feels like the process of trying to manage it all is almost like a second job. This person has been trying extremely hard to change, and I know they never intended to hurt me. But the consequences of their actions have been devastating. However, I have been finding that whatever it is that I do appears to have limited value. I have journaled approximately 800 pages, and I have taken steps for the trauma-holding host to go on a sabbatical, with others fronting for him while he processes the baggage. There has been no noticeable difference. We have tried to work through the baggage with the trauma-holding host, and it hasn't worked.
Normally in life, when I encounter a significant problem I try to "spreadsheet it." I know that for some people it is intimidating -- and I am the first to admit that I do not make the most user-friendly sheets. But they make sense in my head. In quantifying and itemizing trauma, I reached some important conclusions that might be helpful in dealing with issues. Namely:
  1. the host was actually less of an issue than the repository of anger/rage. That might seem intuitive, but the host was "acting out" more. Therefore, my assumption was that hyperfocusing on the host was the optimal strategy. That might still be the general strategy, but the data indicates that focusing on the host disproportionately will not dramatically reduce the total amount of trauma.
  2. we have been in therapy a great deal due to issues with our BFF. We thought we were silly for allocating so much of our resources and mental energy into normalizing things with them. However, cell C3 reveals that BFF's impact comprises 33.98% of all trauma. And when adjusted for the extent to which we encounter them as a trigger, that number increases in cell F3 to 42.41%! Our decision to focus on them is validated.
  3. column AK contains our system's assessment of the overall accuracy of many of the thoughts and beliefs that undergird our issues. The accuracy of each statement is assessed 1 - 10. I divide this number by the total trauma score to get a ratio in column AL that allows me to prioritize my therapy work. I call it a Processing Resistance Quotient, but that is not all it represents. It is basically a "bang for your buck" measurement where the difficulty of changing a belief is balanced against the amount of trauma the issue creates -- giving you an idea of where you can start your work to experience what will (hopefully) be quick relief. The idea here is that if the system is dealing with issues stemming from faulty assumptions, then individuals using modalities like REBT / CBT can get results more efficiently by focusing on the lower percentages.
  4. some alters might really be NPCs as they had no connection whatsoever to any of the issues raised.
  5. The data indicates that problems in the system really are system problems. Perhaps this is simply a reflection of the choices in my model, but if it is legitimate, it shows me that at least in my system there are no problems where the issue is concentrated in one alter. This indicates the extent to which issues are shared even beyond trauma holders.
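The Processing Resistance Quotient in item 3 is a simple ratio; a hedged Python sketch (the belief names, accuracy values, and trauma totals are invented for illustration):

```python
# Hypothetical sketch of the Processing Resistance Quotient (column AL):
# belief accuracy (1-10) divided by the issue's total trauma score.
# Lower values mean a less accurate belief per unit of trauma -- a better
# "bang for your buck" target for REBT/CBT-style work.

beliefs = [
    {"belief": "I am unlovable",        "accuracy": 2, "total_trauma": 40},
    {"belief": "BFF will abandon me",   "accuracy": 5, "total_trauma": 25},
    {"belief": "work defines my worth", "accuracy": 6, "total_trauma": 10},
]

for b in beliefs:
    b["prq"] = b["accuracy"] / b["total_trauma"]

# Start with the lowest quotient: low-accuracy beliefs driving high trauma.
ordered = sorted(beliefs, key=lambda r: r["prq"])
for b in ordered:
    print(f'{b["belief"]}: {b["prq"]:.2f}')
```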
Here are some things that are not modeled that I might add later for more information:
  1. it does not assess the functionality of alters. It does not factor in whether a trauma holder might be more able to hold trauma because that is their job, for example. I can think of ways that this could be modeled -- but it would have to be done on a case-by-case basis.
  2. It does not factor in alters that front or seem to comprise a greater share of the total system. This model is made with the assumption that all alters are created equal.
If I had to make it again, I would add categories that assess the emotional resilience of each alter. I know, for example, that the host is probably less resilient than anger or the system protector. I might add in a category that factors in the extent to which each alter is critical to the functionality of the system as a whole. When the functionality score is multiplied by the trauma score of the issue held by the alter and the trigger score, that might provide a more accurate measurement of the total functionality impact.
If you think that this might be helpful, you are welcome to make a copy and simply replace the names, issues, and scores with your own. If you have any questions (I am still working to make this more user friendly), feel free to ask! There are lots of ballpark assumptions, and I would welcome any criticism or feedback. But I hope that even if the numbers give you a stroke, it at least allows you to think about ways to tackle trauma systemically.
submitted by FochingGreatStache to DID [link] [comments]


2024.05.13 05:47 cryptokaykay [D] Thoughts on DSPy

I have been tinkering with DSPy and thought I would share my 2 cents here for anyone who is planning to explore it:
There are two core ideas behind DSPy:
  1. Separate programming from prompting
  2. Incorporate some of the best-practice prompting techniques under the hood and expose them as a "signature"
Imagine working on a RAG. Today, the typical approach is to write some retrieval logic and pass the results to a language model for natural language generation. But after the first pass, you realize it's not perfect and you need to iterate and improve it. Typically, there are 2 levers to pull:
  1. Document chunking, insertion and retrieval strategy
  2. Language model settings and prompt engineering
Now, you try a few things, maybe document the performance in a google sheet, iterate and arrive at an ideal set of variables that gives max accuracy.
Now, let's say after a month, the model upgrades, and all of a sudden the accuracy of your RAG regresses. Again you are back to square one, because you don't know what to optimize now - retrieval or model? You see what the problem is with this approach? This is a very open-ended, monolithic, brittle and unstructured way to optimize and build language-model-based applications.
This is precisely the problem DSPy is trying to solve. Whatever you can achieve with DSPy can be achieved with native prompt engineering and program composition techniques, but it is purely dependent on the programmer's skill. DSPy, however, provides native constructs which anyone can learn and use for trying different techniques in a systematic manner.
DSPy the concept:
Separate prompting from programming and signatures
DSPy does not do any magic with the language model. It just uses a bunch of prompt templates behind the scenes and exposes them as signatures. Ex: when you write a signature like ‘context, question -> answer’, DSPy adds a typical RAG prompt before it makes the call to the LLM. But DSPy also gives you nice features like module settings, assertion based backtracking and automatic prompt optimization.
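The signature-to-prompt idea can be illustrated without DSPy at all. The following is a toy reconstruction of the concept - the template wording and the build_prompt helper are mine, not DSPy's actual internals:

```python
# Toy illustration of the "signature" idea: a string like
# "context, question -> answer" expands into a prompt template.
# This is a reconstruction of the concept, not DSPy's real code.

def build_prompt(signature: str, **inputs) -> str:
    left, right = signature.split("->")
    in_fields = [f.strip() for f in left.split(",")]
    out_field = right.strip()
    lines = [f"Given the fields {', '.join(in_fields)}, produce the field {out_field}.", ""]
    for field in in_fields:
        lines.append(f"{field.capitalize()}: {inputs[field]}")
    lines.append(f"{out_field.capitalize()}:")  # the LM completes this field
    return "\n".join(lines)

prompt = build_prompt(
    "context, question -> answer",
    context="DSPy separates programming from prompting.",
    question="What does DSPy separate?",
)
print(prompt)
```

The point is only that a terse signature carries enough structure to expand into a full prompt behind the scenes.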
Basically, you can do something like this with DSPy,
“Given a context and question, answer the following question. Make sure the answer is only “yes” or “no””. If the language model responds with anything else, traditionally we prompt-engineer our way to fix it. In DSPy, you can assert that the answer is “yes” or “no”, and if the assertion fails, DSPy will backtrack automatically, update the prompt to say something like “this is not a correct answer - {previous_answer} - always only respond with a “yes” or “no””, and make another language model call, which improves the LLM's response because of this newly optimized prompt. In addition, you can also incorporate things like multiple hops in your retrieval, where you can do something like “retrieve -> generate queries and then retrieve again using the generated queries” n times and build up a larger context to answer the original question.
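The assert-and-backtrack behavior described above can be sketched in plain Python. The stub model and retry loop here are illustrative only - they mimic the control flow, not DSPy's real implementation:

```python
# Plain-Python sketch of assertion-based backtracking: if the LM output
# fails a check, fold corrective feedback into the prompt and retry.
# The "language model" is a stub that misbehaves until corrected.

def stub_lm(prompt: str) -> str:
    # Complies only once the corrective instruction appears in the prompt.
    return "yes" if "only respond with" in prompt else "Yes, definitely!"

def predict_with_assertion(prompt: str, valid=("yes", "no"), max_retries=2) -> str:
    for _ in range(max_retries + 1):
        answer = stub_lm(prompt)
        if answer in valid:
            return answer
        # Backtrack: append the failure feedback and try again.
        prompt += (f'\nThis is not a correct answer - "{answer}" - '
                   f'always only respond with "yes" or "no".')
    raise ValueError("assertion failed after retries")

result = predict_with_assertion("Is DSPy a Python framework? Answer yes or no.")
print(result)  # "yes" after one backtracking pass
```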
Obviously, this can also be done using usual prompt engineering and programming techniques, but the framework exposes native, easy-to-use settings and constructs to do these things more naturally. DSPy as a concept really shines when you are composing a pipeline of language model calls, where prompt engineering the entire pipeline, or even each module, can lead to a brittle pipeline.
DSPy the Framework:
Now coming to the framework, which is built in Python, I think the framework as it stands today:
  1. Is not production ready
  2. Lacks clear documentation
  3. Is poorly designed, with not-so-clean interfaces and abstractions
To me it felt like a rushed implementation with little thought given to design, testing and programming principles. The framework code is very hard to understand, with a lot of metaprogramming and data-structure parsing and construction going on behind the scenes that is scary to run in production.
This is a huge deterrent for anyone trying to learn and use this framework. But I am sure the creators are thinking about all this and are working to reengineer the framework. There's also a TypeScript implementation of this framework that is fairly less popular but has a much better and cleaner design and codebase:
https://github.com/dosco/llm-client/
My final thought about this framework is: it's a promising concept, but it does not change anything about what we already know about LLMs. Also, hiding prompts as templates does not mean prompt engineering is going away; someone still needs to "engineer" the prompts the framework uses, and imo the framework should expose these templates and give control back to the developers. That way, the vision of separating programming and prompting coexists with giving control not only to program but also to prompt.
Finally, I was able to understand all this by running DSPy programs and visualizing the LLM calls and what prompts it’s adding using my open source tool - https://github.com/Scale3-Labs/langtrace . Do check it out and let me know if you have any feedback.
submitted by cryptokaykay to MachineLearning [link] [comments]


2024.05.13 04:07 Datjman034 Can't get Automated Loadout to post in correct format

Hey, I'm trying to post my ship build using the automated loadout, but every time I post it, it just posts as plain text. I made sure I'm using the "markdown editor" and copied from the "post template" tab of the Excel sheet, but still can't get it to work. Anyone know how to fix this?
Thanks
submitted by Datjman034 to stobuilds [link] [comments]


2024.05.13 01:31 52cr Can't clear HP MFP 4301fdw paper jam

I’m including the troubleshooting template I used last time I asked for help. Not sure if it’s still required here.
What would you like to troubleshoot?
Can’t clear paper jam from my HP MFP 4301fdw.
Printer Model: HP Color LaserJet Pro MFP 4301fdw
Error Message:
Paper Jam Paper is jammed inside the printer. Open the rear door and clear any jammed paper. Event code 13.10.14.
Ethernet Cable, WiFi or USB: Ethernet
Driver name and version:
HP Smart Universal Printing HP 3.8.1.2731
HP Color LaserJet Pro MFP 4301 [31A40C] Microsoft 10.0.19041.1
HP Color LaserJet Pro MFP 4301 [31A40C] Microsoft 10.0.19041.1
Firmware version: 6.12.1.12-202306030312
OS: Windows 10 version 22H2 build 19045.4291

Number of machines impacted: 1
Number of users impacted: 1

Original or 3rd party cartridges: Original
Any other details:
I successfully (duplex) printed one card on a sheet of blank Avery business cards (#5871), and I removed that card from the sheet. Attempting to print another (duplex) card, I fed the same sheet into my printer’s tray 1 (the multipurpose tray). Not too surprisingly, the printer jammed. Somewhat surprisingly (to me), it seems to be impossible to clear the jam.
The jam occurred just as the sheet was starting to exit through the slot above the output bin. I was able to grab and pull out small pieces of paper through that slot, but I believe most of the sheet is still inside the printer, possibly in fragments.
On the printer’s control panel, I thought I saw event code 13.10.04, but after power cycling a few times, I consistently see the following on the display:
Paper Jam Paper is jammed inside the printer. Open the rear door and clear any jammed paper. Event code 13.10.14.
I no longer see any paper peeking out of the exit slot above the output bin. I removed paper tray 2, extended the toner cartridge tray, and opened the user-accessible doors, but I don’t see any jammed paper anywhere. I suspect what’s left of my sheet is stuck near the exit slot. I don’t see any obvious way to get to that area from the back of the printer.
Do I need to hire a pro to unjam my printer, or is there something I’ve overlooked that I could try myself?

submitted by 52cr to printers [link] [comments]


2024.05.13 00:18 Ozmaister11 Dungeons and Dragons character sheet?

I had a pretty basic D&D character sheet from some template I found on the web back when I was using Notion. Have any of you tried making a D&D character sheet on Anytype?
submitted by Ozmaister11 to Anytype [link] [comments]


2024.05.12 23:00 v_dawg3 how to use the map chart, but with words?

https://preview.redd.it/qyp49i3m920d1.png?width=1405&format=png&auto=webp&s=53caf74763f72676832d06bfc0bf5c651fe0c308
hey everyone, I'm trying to make a simple template that shows where I've traveled in the USA using Google Sheets. The problem I'm running into is that I want to use words instead of the number values.
0 = yes
1 = want to visit
2 = lived there
how do I make it so the dropdowns let me display e.g. "want to visit" instead of a "1"?
submitted by v_dawg3 to sheets [link] [comments]


2024.05.12 22:00 forthesect Subreddit suggestion and submission tracking.

This post contains a set of Google Docs tracking suggestions for resources/tools, a list of relevant subreddits, general ideas, and subreddit improvement suggestions, as well as one listing past book club submissions. If you have any additional suggestions or additions to any list other than book club submissions, comment below.
.............................................................................................................................................................................
Here is a list of tools, resources, or inspiring media.
Examples of tools would be fully customizable character sheet templates, sites or apps to keep track of and organize worldbuilding information, and even sites like Discord that allow you to set up a community for your project. Please comment below with any suggestions
Examples of resources would be probability sheets, in-depth articles on RPG design, or even a link to a resource and tool allocation page/thread like this one. Please comment below with any suggestions
Examples of inspiring media would be podcasts or videos that talk about design or RPGs in general, cool RPGs you like, and even music that helps you when you are writing. Please comment below with any suggestions
https://docs.google.com/document/d/1fAwgfhHMvjH7oF6uA_k52LNh9oDeg7fuBhjdYNItomg/edit
Here is a list of RPG-related subreddits (may eventually become tiered so that design- and promotion-based subreddits are separate from general RPG subreddits). Please comment below with any suggestions
https://docs.google.com/document/d/1PIh4u0zFojz52lMV-HOKyEqHyBppQhy77DEQ6ZTBcLs/edit
Here is a list of submitted ideas to just throw out there or advice that doesn't merit a full post. Please comment below with any suggestions
https://docs.google.com/document/d/1XqHvCKd2WNTAxkifHHMBjCMjMX3Pc5yGwch4777uPWI/edit
Here is a catalogue of suggested post categories and improvements for the sub, as well as a list of improvements and policies I have instituted. Please comment below with any suggestions
https://docs.google.com/document/d/1IpadqJUgJsieRimkjbrKdnu3yVX1l9KNvh1xE6Ha4ec/edit
Here are links to each bookclub submission.
https://docs.google.com/document/d/1TyZdJ8JI4_b66fAvkzYlDYbNcm9v6pLcVL8_Sxtl0kw/edit
submitted by forthesect to myrpg [link] [comments]

