2011.07.02 14:49 k_rock923 Managed Service Providers
2010.03.28 22:03 Reddit J-Pop
2015.02.20 03:34 Fungus_Schmungus Climate Change Science and News
2024.06.10 22:42 sunriseluxuryretreat Branson Missouri private villas
![]() | Welcome to the heart of the Ozarks, where natural beauty meets luxurious living. If you're planning a getaway to Branson, Missouri, and seeking the ultimate in comfort and privacy, look no further than private villas and upscale getaways. Whether you're envisioning a serene lakeside retreat or a stylish urban escape, Branson offers a diverse range of options to suit your preferences. In this blog, we'll delve into the allure of Branson's private villas and upscale accommodations, highlighting some of the top features and attractions that make this destination a must-visit. submitted by sunriseluxuryretreat to u/sunriseluxuryretreat [link] [comments] https://preview.redd.it/h72pb6z23t5d1.jpg?width=1280&format=pjpg&auto=webp&s=fdc014d570b5d28acc5090e5114ccce08badda9f Branson, Missouri Condos for Rent on Table Rock Lake Imagine waking up to the gentle lapping of waves and the soothing sounds of nature. Branson's condos for rent on Table Rock Lake offer exactly that and more. With stunning lake views, modern amenities, and convenient access to water-based activities like boating, fishing, and swimming, these condos provide an idyllic setting for a memorable vacation. Whether you're traveling with family, friends, or on a romantic getaway, these waterfront condos offer the perfect blend of relaxation and adventure. https://preview.redd.it/begobf443t5d1.jpg?width=944&format=pjpg&auto=webp&s=33e6cda14189a6cdbc989c5408ecb160395139ac Branson Missouri Vacation Rentals: Your Home Away from Home For those seeking a more spacious and secluded retreat, Branson's vacation rentals are an ideal choice. From cozy cabins nestled in the woods to expansive estates with private pools and hot tubs, these rentals cater to every comfort and preference. Imagine unwinding in a plush living room with a fireplace, cooking gourmet meals in a fully equipped kitchen, or soaking up the sun on a private deck overlooking the rolling hills of the Ozarks. 
With options ranging from charming cottages to luxurious estates, Branson's vacation rentals ensure that your stay is nothing short of spectacular. Branson Houses for Vacation Rental: Luxury Meets Convenience If you're looking to combine luxury with convenience, Branson's houses for vacation rental are the perfect solution. Featuring upscale amenities such as spacious layouts, designer furnishings, and state-of-the-art entertainment systems, these houses provide a premium experience for discerning travelers. Whether you're planning a family reunion, a corporate retreat, or a special celebration, these houses offer ample space and privacy for everyone to enjoy. Plus, with easy access to Branson's top attractions, dining, and entertainment options, you'll have everything you need for a memorable stay. Branson Landing Vacation Rentals: Urban Sophistication in the Heart of Branson For those who prefer a more urban vibe, Branson Landing vacation rentals offer a blend of sophistication and convenience. Located in the heart of downtown Branson, these rentals put you within steps of upscale shopping, dining, and entertainment options. From stylish lofts with city views to chic apartments with luxury amenities, Branson Landing vacation rentals provide a contemporary retreat amidst the excitement of the city. Explore the vibrant streets, catch a live show, or simply relax and enjoy the cosmopolitan atmosphere right outside your doorstep. Contact Us for Your Branson Luxury Retreat Ready to experience the best of Branson, Missouri? Contact Sunrise Luxury Retreat for your private villa or upscale getaway. With a range of options to choose from and personalized concierge services, we ensure that your stay exceeds expectations. Visit our website at Sunrise Luxury Retreat to explore our properties and amenities. You can also reach out to us via email at [angela@sunriseluxuryretreat.com]() or give us a call at +1 (417) 576-4085. Your dream vacation in Branson awaits! 
Conclusion: Discover the Beauty of Branson in Style Branson, Missouri, offers a captivating blend of natural beauty, cultural richness, and luxurious accommodations. Whether you're drawn to the tranquility of Table Rock Lake or the excitement of downtown Branson, private villas and upscale getaways provide the perfect base for your exploration. With a variety of vacation rentals to choose from, personalized services, and a host of activities and attractions, Branson promises an unforgettable experience for travelers seeking a blend of relaxation and adventure. Plan your getaway today and discover why Branson is a top destination for luxury and leisure. |
2024.06.10 22:41 Darnitol1 First NCL Cruise - Am I asking too much here?
2024.06.10 22:36 dmagee33 [ALL STATES] Anyone received a "Federal Benefits Accuracy Measurement Program" Audit? Is this legit?
2024.06.10 22:36 SimilarNerve731 LinkedIn does a really good job vetting legit job postings...
![]() | submitted by SimilarNerve731 to recruitinghell [link] [comments] |
2024.06.10 22:36 uhmare A rant about Ancestry stonewalling me for a week.
2024.06.10 22:34 VolumeSimilar7983 MCOL guidance for unpaid domestic bills
2024.06.10 22:32 skiadventure For sale: 2 front row box tickets for Beethoven's 9th symphony at NAC (Thursday, June 20, 2024, 8 p.m.)
2024.06.10 22:28 nire0026 Is this the latest scam attempt?
![]() | I received two of these from two different email addresses in two weeks. I went through the ‘disconnect email’ process. submitted by nire0026 to Scams [link] [comments] |
2024.06.10 22:25 Kelsbells1022 I’m grateful I get to continue to teach at an awesome school
2024.06.10 22:25 MrNoGains Account recovery setup loop
2024.06.10 22:24 ADH33RA Got in for nights & weekends S5 !
![]() | Just received an email. The website was down, but I kept trying to log in repeatedly to check if I was selected, and yep, I was! How about everyone else? Super excited for N&W Season 5! submitted by ADH33RA to buildspace_ [link] [comments] |
2024.06.10 22:16 AdRegular4196 Served simple procedure for boiler replacement after house sale - Scotland
2024.06.10 22:11 edengonedark Money stolen. NDAs keep me quiet. No victory or justice. Sometimes it kills me.
2024.06.10 22:09 ansi09 All You Need To Know About Solana V1.18 Update
![]() | Source: https://www.helius.dev/blog/all-you-need-to-know-about-solanas-v1-18-update submitted by ansi09 to solana [link] [comments]

All You Need to Know About Solana's v1.18 Update

A big thank you to Rex St. John and Mike MacCana for reviewing this article.

Introduction

The super-majority adoption of Solana's 1.18 update is a significant milestone. It ushers in a host of improvements and new features aimed at enhancing the network's performance, reliability, and efficiency. One of the most notable changes is the introduction of a central scheduler, which streamlines transaction handling and ensures more accurate and efficient priority calculations. Other improvements, such as those to the runtime environment and program deployments, help provide more reliable performance even during peak network load.

This article explores the updates and improvements brought by the 1.18 release. We'll examine the motivations behind these changes, the specifics of the new features, and their expected impact on the network. Whether you're a validator operator, a developer, or an everyday Solana user, this overview of the 1.18 update will give you the information you need to understand and leverage these improvements. We must first discuss Anza, a newly established development firm driving these changes, and its role in the ongoing development of Solana.

What's Anza?

Anza is a software development firm created by former executives and core engineers from Solana Labs. Its creation represents a strategic move to bolster Solana's ecosystem, aiming to improve its reliability, decentralization, and network strength.
Anza was founded to enhance Solana's ecosystem by developing critical infrastructure, contributing to key protocols, and fostering the innovation of new tools. The founding team includes Jeff Washington, Stephen Akridge, Jed Halfon, Amber Christiansen, Pankaj Garg, Jon Cinque, and several core engineers from Solana Labs. Anza is focused on developing and refining Solana's validator clients with the creation of Agave, a fork of the Solana Labs validator client. Anza's ambitions extend beyond its validator client: the firm is committed to ecosystem-wide improvements, including the development of Token Extensions and a customized Rust/Clang toolchain. By fostering a collaborative and open approach to development, Anza is dedicated to accelerating and improving the Solana ecosystem.

What's Agave?

As mentioned in the previous section, Agave is a fork of the Solana Labs validator client spearheaded by Anza. In this context, the term "fork" refers to Anza's development team taking the existing code from the Solana Labs repository and starting a new development path separate from the original codebase. This allows Anza to implement its own improvements, features, and optimizations to the Solana Labs client.

The Migration Process

The migration of the client to Anza's GitHub organization started on March 1st. Initially, Agave will mirror the Solana Labs repository to give the community time to adjust. During this period, Anza will handle closing pull requests (PRs) and migrating relevant issues to Agave's repository. Agave and the Solana Labs client versions 1.17 and 1.18 will be identical in terms of functionality. Anza aims to release Agave v2.0 this summer, which includes archiving the Solana Labs client and recommending that 100% of the network migrate to the new Agave client. The Solana Labs to Agave migration process is publicly tracked on GitHub.
The Agave Runtime

The Agave Runtime inherits its foundational architecture from the Solana Virtual Machine (SVM) and is the backbone for executing the core functionalities defined by the Sealevel runtime. The Solana protocol delineates the runtime as a critical component for processing transactions and updating state within the accounts database. This specification has been adopted and further refined by the Agave and Firedancer clients. The essence of the SVM is its capability to execute all Solana programs and modify account states in parallel.

The concept of a bank is key to processing transactions and understanding the changes coming in 1.18. A bank is both a piece of logic and a representation of the ledger state at a specific point in time. It acts as a sophisticated controller managing the accounts database, tracking client accounts, managing program execution, and maintaining the integrity and progression of Solana's ledger. A bank encapsulates the state resulting from the transactions included in a given block, serving as a snapshot of the ledger at that point in time. Each bank is equipped with the caches and references necessary for transaction execution, allowing it to be initialized from a previous snapshot or the genesis block. During the Banking Stage, where the validator processes transactions, banks are used to assemble blocks and later verify their integrity. This lifecycle includes loading accounts, processing transactions, freezing the bank to finalize state, and eventually making it rooted, ensuring its permanence.

As a general overview, the transaction processing engine within the Agave Runtime is tasked with loading, compiling, and executing programs. It uses Just-In-Time (JIT) compilation, caching compiled programs to optimize execution efficiency and reduce unnecessary recompilation. Programs are compiled to eBPF format before deployment.
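The compile-once-and-cache behavior described above can be sketched in miniature. This is a hedged illustration, not the Agave implementation (which is Rust and far more involved); the `ProgramCache` class, `_jit_compile` helper, and callable stand-in are all hypothetical:

```python
class ProgramCache:
    """Toy sketch of the JIT-and-cache idea: compile a program once,
    reuse the compiled artifact on subsequent executions."""

    def __init__(self):
        self._cache = {}        # program_id -> compiled executable
        self.compilations = 0   # counts how often we actually "JIT"

    def _jit_compile(self, program_id, bytecode):
        # Stand-in for eBPF -> native machine-code JIT compilation.
        self.compilations += 1
        return lambda *args: ("executed", program_id, args)

    def get_executable(self, program_id, bytecode):
        # Cache hit avoids recompiling; miss triggers one compilation.
        if program_id not in self._cache:
            self._cache[program_id] = self._jit_compile(program_id, bytecode)
        return self._cache[program_id]
```

Repeated lookups for the same program return the cached executable without recompiling, which is the efficiency the runtime is after.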
The runtime then uses the rBPF toolkit to create an eBPF virtual machine, which performs JIT compilation from eBPF to x86_64 machine code instructions, taking full advantage of the available hardware. This ensures programs are executed efficiently. The 1.18 update introduces a central transaction scheduler, which is deeply intertwined with the operational efficiencies introduced by the Agave Runtime. By improving how transactions are compiled, executed, and managed via banks, the 1.18 update enables a more streamlined and efficient scheduling process. In turn, this leads to faster transaction processing times and enhanced throughput. The new Agave Runtime and its client serve as the bedrock for these enhancements, so it's worth having a general understanding of them before we dive into the intricacies of the new scheduler. If you want to learn more about the Agave Runtime, I recommend reading Joe Caulfield's article on the topic. It goes into considerable detail and provides helpful code snippets throughout.

A More Efficient Transaction Scheduler

The Current Implementation

https://preview.redd.it/31sn3yitws5d1.png?width=3840&format=png&auto=webp&s=a62efec97134d30b02dafd820a40be835f1c4a43

Source: Adapted from Andrew Fitzgerald's article Solana Banking Stage and Scheduler

In the transaction processing pipeline, packets of transactions first enter the system through packet ingress. These packets then undergo signature verification during the SigVerify stage. This step ensures each transaction is valid and authorized by the sender.

https://preview.redd.it/zps9v7ovws5d1.png?width=3840&format=png&auto=webp&s=0f866159cbf4a32f26d3d050aec24a7ad8b92e5e

Following signature verification, transactions are sent to the Banking Stage. The Banking Stage has six threads: two dedicated to processing vote transactions from either the Transaction Processing Unit (TPU) or Gossip, and four focused on non-vote transactions.
Each thread operates independently and receives packets from a shared channel: SigVerify sends packets over in batches, and each thread pulls transactions from that shared channel and stores them in a local buffer. The local buffer receives the transactions, determines their priority, and sorts them accordingly. This queue is dynamic, constantly updating to reflect real-time changes in transaction status and network demands. As transactions are added to the queue, their order is reassessed to ensure the highest-priority transactions are processed first. This process happens continuously, and what happens to these packets depends on the validator's position in the leader schedule. If the validator is not scheduled to be the leader in the near future, it will forward packets to the upcoming leader and drop them. As the validator gets closer to its scheduled leadership slot (~20 slots away), it continues forwarding packets but no longer drops them. This ensures these packets can be included in one of its own blocks if the other leaders don't process them. When a validator is 2 slots away from becoming the leader, it starts holding packets — accepting them and doing nothing with them so they can be processed when the validator becomes leader. During block production, each thread takes the top 128 transactions from its local queue, attempts to grab locks, and then checks, loads, executes, records, and commits the transactions. If the lock grab fails, the transaction is retried later. Let's expand upon each step:
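The forward/hold/drop policy described above can be sketched as a small decision function. This is an illustrative sketch of the described behavior, not actual validator code; the function name and return convention are invented, and the ~20-slot and 2-slot thresholds come directly from the description above:

```python
def packet_action(slots_until_leader):
    """Decide what the banking stage does with a buffered packet, based on
    how close this validator is to its next leader slot.
    Returns (forwarding decision, buffer decision)."""
    if slots_until_leader is None or slots_until_leader > 20:
        # Far from leadership: forward to the upcoming leader, then drop.
        return ("forward", "drop")
    if slots_until_leader > 2:
        # Approaching leadership (~20 slots out): still forward, but keep
        # a copy so the packet can land in one of our own blocks.
        return ("forward", "hold")
    # Two slots away or less: just hold for local block production.
    return (None, "hold")
```

For example, a validator 50 slots from leadership forwards and drops, while one 2 slots away quietly holds everything it receives.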
The Banking Stage uses a multi-iterator approach to create these batches of transactions. A multi-iterator is a programming pattern that allows simultaneous traversal over a dataset in multiple sequences. Think of it as having several readers going through a single book, each starting at different chapters, coordinating to ensure they don't read the same page at the same time if their understanding of the content might interfere with one another. In the Banking Stage, these "readers" are iterators, and the "book" is the collection of transactions waiting to be processed. The goal of the multi-iterator is to efficiently sift through transactions, grouping them into batches that can be processed without any lock conflicts. Initially, the transactions are serialized into a vector based on priority. This gives the multi-iterator a structured sequence to segment these transactions into non-conflicting batches. The multi-iterator begins at the start of the serialized vector, placing iterators at junctures where transactions don't conflict with one another. In doing so, it creates batches of 128 transactions without any read-write or write-write conflicts. If a transaction conflicts with the currently forming batch, it's skipped and left unmarked, allowing it to be included in a subsequent batch where the conflict no longer exists. This iterative process adjusts dynamically as transactions continue to be processed. After successfully forming a batch, the transactions are executed and, if successful, recorded in the Proof of History Service and broadcast to the network.

The Problems with the Current Implementation

The current implementation has several areas where performance can be adversely affected, leading to potential bottlenecks in transaction processing and inconsistent prioritization.
These challenges primarily stem from the architecture of the Banking Stage and the nature of transaction handling within the system.

A fundamental issue is that each of the four independent threads processing non-vote transactions has its own view of transaction priority. This discrepancy can cause jitter or inconsistency in transaction ordering, and it becomes more pronounced when high-priority transactions conflict. Since packets are pulled essentially at random by each thread from the shared channel fed by SigVerify, each thread holds a random subset of all the transactions. During competitive events, such as a popular NFT mint, many high-priority transactions are likely to be present in multiple Banking Stage threads. This is problematic because it can cause inter-thread locking conflicts. The threads, working with different sets of priorities, may race against each other to process these high-priority transactions, inadvertently wasting processing time on unsuccessful lock attempts.

Think of the Banking Stage as an orchestra where each thread is a different section: strings, brass, woodwinds, and percussion. Ideally, a conductor would coordinate these sections to ensure a harmonious performance. However, the current system resembles an orchestra trying to perform a complex piece without a conductor. Each section plays its own tune, regularly clashing with the others. High-priority transactions are the solo parts all sections attempt to play simultaneously, causing confusion. This lack of coordination highlights the need for a centralized "conductor" to ensure efficiency and harmony in Solana's transaction processing.

The New Transaction Scheduler

https://preview.redd.it/zirp1j60xs5d1.png?width=3840&format=png&auto=webp&s=0d75e3721a78a2d836914eafd27010919a6e5e71

The 1.18 update introduces a central scheduling thread, replacing the previous model of four independent banking threads, each managing its own transaction prioritization and processing. In this revised structure, the central scheduler is the sole recipient of transactions from the SigVerify stage. It builds a priority queue and deploys a dependency graph to manage transaction prioritization and processing. This is a transaction dependency graph.
The arrows mean "is depended on." For example, Transaction A is depended on by Transaction B, and Transaction B is depended on by both Transaction C and Transaction D. This dependency graph is known as a prio-graph: a directed acyclic graph that is lazily evaluated as new transactions are added. Transactions are inserted into the graph to create chains of execution and are then popped in time-priority order. When dealing with conflicting transactions, the first to be inserted always has higher priority. In the example above, we have transactions A through H. Note that transactions A and E have the highest priority within their respective chains and do not conflict. The scheduler moves from left to right, processing the transactions in batches:

https://preview.redd.it/mc8o80n4xs5d1.png?width=3840&format=png&auto=webp&s=3d882c3d54e7f989b05b424d4cac6be488182591

Transactions A and E are processed as the first batch; then B and F; then C, D, and G; and H as the final batch. As you can see, the highest-priority transactions are at the top of the graph (i.e., to the far left). As the scheduler examines transactions in descending order, it identifies conflicts. If a transaction conflicts with a higher-priority one, an edge is created in the graph to represent this dependency (e.g., C and D conflict with B). The new scheduler model addresses several key issues inherent to the multi-iterator approach:
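The pop-in-batches behavior of a prio-graph can be sketched with a toy scheduler. This is a hypothetical Python illustration, not the Rust implementation: transactions are given as a priority-ordered list of (name, accounts) pairs, a transaction depends on every earlier transaction it shares an account with, and each round pops everything whose dependencies have completed. The account sets in the usage example are invented:

```python
def prio_graph_batches(txs):
    """txs: priority-ordered list of (name, set_of_accounts).
    Returns the batches in the order a prio-graph-style scheduler
    would pop them."""
    # Build edges: `earlier` is depended on by `name` when they conflict.
    deps = {name: set() for name, _ in txs}
    seen = []
    for name, accounts in txs:
        for earlier, earlier_accounts in seen:
            if accounts & earlier_accounts:
                deps[name].add(earlier)
        seen.append((name, accounts))

    # Pop in rounds: everything whose dependencies are all done is ready.
    batches, done = [], set()
    while len(done) < len(txs):
        ready = [n for n, _ in txs if n not in done and deps[n] <= done]
        batches.append(ready)
        done |= set(ready)
    return batches
```

With two hypothetical conflict chains, A→B on account "x" and E→F on account "y", the scheduler pops [A, E] first and then [B, F], mirroring the first two batches in the example above.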
Note that the central scheduler is not enabled by default and must be enabled using the new --block-production-method central-scheduler flag when starting a validator. It is currently opt-in only but will become the default scheduler in future releases. The old scheduler remains the current default and can be selected explicitly with the --block-production-method thread-local-multi-iterator flag, though this is discouraged going forward: the central scheduler is much more efficient and addresses the issues presented by the old scheduler.

More Effective Priority Calculation

1.18 also refines how transaction priority is determined, making the process more equitable and efficient regarding resource usage and cost recovery. Previously, transaction prioritization was based primarily on compute budget priority, sometimes leading to suboptimal compute unit pricing. The prioritization did not adequately consider the base fees collected, leading to situations where resources could be underpriced, affecting the network's operational efficiency. The new approach adjusts the transaction priority calculation to consider the transaction fees and the associated costs using the formula Priority = Fees / (Cost + 1). Here, the fees represent the transaction fees associated with a given transaction, while the cost represents the compute and resource consumption determined by Solana's cost model. Adding 1 to the denominator is a safety measure to prevent division by zero. We can break down the formula further to make Fees and Cost more explicit: the cost of a transaction is now calculated comprehensively, considering all associated compute and operational costs. This ensures that priority calculations reflect a transaction's true resource consumption. Developers and users will receive higher priority if they request fewer compute units, and simple transfers without any priority fees will still have some priority in the queue.
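The formula above is simple enough to state directly. A minimal sketch (the function name is invented, and units and the fee/cost decomposition are simplified):

```python
def transaction_priority(fees, cost):
    """Priority = Fees / (Cost + 1), as described above.
    `fees` is the total fees paid for the transaction and `cost` is the
    cost-model estimate of its compute/resource consumption; the +1 in
    the denominator guards against division by zero."""
    return fees / (cost + 1)
```

For the same fee, a transaction that requests fewer compute units (lower cost) scores higher, and a zero-cost transfer paying only the base fee still has nonzero priority.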
Improved Program Deployment

1.18 also significantly improves the reliability and efficiency of program deployments. The update addresses an issue where programs deployed in the last slot of an epoch did not correctly apply the runtime environment changes planned for the subsequent epoch; a program deployed during this transition period would erroneously use the old runtime environment. 1.18 adjusts the deployment process to ensure that the runtime environment for any program deployed at the end of an epoch matches the environment of the upcoming epoch.

1.18 also addresses the inability to set a compute unit price or limit on deploy transactions by adding the --with-compute-unit-price flag to the CLI program deploy commands. This flag can be used with the solana program deploy and solana program write-buffer commands. The compute unit limit is set by simulating each type of deploy transaction and setting the limit to the number of compute units consumed.

Another important improvement involves how blockhashes for large program deployments are handled. Before 1.18, transactions sent with sign_all_messages_and_send were throttled to 100 TPS. For larger programs, the number of deploy transactions can run into the thousands, meaning some transactions could be delayed for more than 10 seconds at a time and risk using expired blockhashes. 1.18 delays signing deploy transactions with a recent blockhash until after the throttling delay. Blockhashes now refresh every 5 seconds, so deployments with over 500 transactions benefit from using a more recent blockhash.

Additionally, 1.18 improves how the network handles program deployments and verifies transactions. Previously, some programs were incorrectly marked as FailedVerification due to errors in identifying account statuses. This could mislabel programs that hadn't actually failed any checks.
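The 500-transaction threshold follows from back-of-the-envelope arithmetic: at the 100 TPS throttle, anything over 500 transactions takes longer than one 5-second blockhash refresh interval just to send. A tiny illustrative helper (name invented; figures taken from the description above):

```python
def send_duration_seconds(num_txs, throttle_tps=100):
    """Time for a deployment's transactions to clear the send throttle,
    using the 100 TPS throttle quoted above."""
    return num_txs / throttle_tps

# A 500-transaction deployment spends a full 5-second blockhash-refresh
# interval just sending, so signing each transaction late (after its
# throttling delay) keeps the blockhash fresh and avoids expiry.
```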
These programs are now correctly identified as Closed if they're not supposed to be active. This change ensures that only problematic programs are flagged for rechecking and helps prevent unnecessary re-verifications. The process for updating program states has also been refined: programs can now transition from a Closed state to an active state within the same slot they are deployed. This means programs become operational faster and more reliably, which is crucial during times of high demand. These adjustments help manage network load more effectively, preventing the kinds of congestion that can slow down transaction processing for everyone.

"The Congestion Patch" — Handling Congestion Better

Testnet version 1.18.11, heralded as "The Congestion Patch," proposed changes to address Solana's recent congestion. Note that this release isn't specific to 1.18; it has been backported to 1.17.31. Regardless, it's worth covering here.

The big change is that QUIC now treats peers with very low stake as unstaked peers in Stake-Weighted Quality of Service (SWQoS). This addresses the fact that staked nodes with a very small amount of stake could abuse the system to get disproportionate bandwidth. In addition, the existing metrics could not distinguish the proportions of packets sent versus throttled for staked and non-staked nodes, so new metrics were added for greater visibility. Packet-chunk handling was also optimized by replacing instances of vec with smallvec, saving an allocation per packet; this is possible because streams are packet-sized, so few chunks are expected. Finally, whereas the Banking Stage previously forwarded all packets to the next node, 1.18 forwards only packets from staked nodes. This update makes staked connections more important than ever, as they carry more weight in priority calculation and transaction forwarding.
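The SWQoS change can be sketched as a simple classification rule. The threshold below is purely illustrative; the actual cutoff used by the patch is not specified here, and the function name is invented:

```python
def swqos_class(stake, total_stake, min_fraction=1e-5):
    """Classify a QUIC peer for Stake-Weighted Quality of Service.
    Peers whose stake is below a tiny fraction of total stake are treated
    as unstaked, so a dust-sized stake can no longer claim a staked node's
    disproportionate share of bandwidth. `min_fraction` is hypothetical."""
    if stake <= 0 or stake / total_stake < min_fraction:
        return "unstaked"
    return "staked"
```

Under this sketch, a node holding 1 lamport of stake on a network with billions staked is served like any unstaked peer, while a meaningfully staked node keeps its weighted share.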
Improved Documentation

The 1.18 update also significantly improves translation support for the official Solana documentation, ensuring greater accessibility for a global audience. Updates include upgrading the Crowdin CLI and configuration (which streamlines the synchronization of documents across languages) and introducing a new serve command for better local testing via Docusaurus. The documentation also improves how static content is handled by linking PDF files directly to GitHub blobs, avoiding issues with relative paths in translated builds.

For developers, the process of contributing to translations is clarified with an updated README covering common issues such as necessary environment variables and typical build errors. This is complemented by improvements in the continuous integration flow, which now includes translations only in stable channel builds, ensuring that only vetted and stable documentation reaches end users. These changes aim to simplify contributions, enhance the official documentation's quality, and give all users access to reliable and accurate information.

Conclusion

Driven by Anza, the 1.18 update substantially improves transaction handling, priority calculations, program deployments, official documentation, and overall network performance. With the introduction of a central scheduler and various fixes aimed at addressing recent congestion, Solana is better equipped to handle peak loads and deliver efficient, reliable network behavior. Solana is the best chance at a scalable blockchain, and this update affirms its potential.

If you've read this far, thank you, anon! Be sure to enter your email address below so you'll never miss an update about what's new on Solana. Ready to dive deeper? Explore the latest articles on the Helius blog and continue your Solana journey today. Additional Resources |
2024.06.10 22:00 Swimming-Mention-829 Smartlead.ai and B2B emails
2024.06.10 21:55 prkskier Wells Fargo Attune Recon Approval DP
2024.06.10 21:53 rmartin_tt What is Value Based Selling?
![]() | When people think about sales, they often recall their experiences as consumers or iconic portrayals of salespeople they’ve seen in television and movies like Glengarry Glen Ross, Mad Men, or The Office, to name a few. These experiences and cultural references typically evoke images of straightforward transactions and direct marketing techniques used in consumer purchases. submitted by rmartin_tt to TakeTurns [link] [comments] However, in the realm of B2B sales, the landscape is quite different. Unlike the straightforward transactions depicted in popular media, organizations participate in what academics have called considered purchases, deliberate buying, or high-involvement buying decisions. This kind of buying is a much more strategic process, characterized by thorough research, evaluation, and detailed planning. Value based selling is a value delivery approach that aligns an organization’s presales, sales, and customer success teams with the way their customers research, evaluate, purchase, and consume solutions. For customer-facing teams, a critical success factor in VBS is understanding and demonstrating how a product or service meets a customer’s unique business needs and objectives. In this article, we'll explore the fundamentals of VBS and investigate how overlooked technologies, such as external collaboration tools, can empower sales teams to achieve greater success with value based selling.

What is Value Based Selling (VBS)?

Value based selling, or VBS, is an approach that focuses on understanding and reinforcing how the product or service meets a customer's requirements and creates value for the customer. This approach differs from traditional selling techniques that might focus primarily on the features or specifications of the product. Here are some key aspects of VBS:
When to Use Value Based Selling?

Not every sales situation calls for value based selling. In fact, VBS is most effective in several key scenarios:
What Does the Value Based Selling Methodology Look Like?

Sales methodologies and CRM solutions prescribe very detailed, complex processes for sales teams. Teams can get so bogged down in the mechanics that they lose sight of the overall objective. To address this, rather than proposing four-, seven-, or ten-step sales processes, here’s our view of the entire selling process mapped to the traditional customer journey or customer activity lifecycle (awareness, consideration, purchase, retention). Assuming that the customer (or prospect) is already aware of your solution (which is why they are engaging with your sales team in the first place!), VBS aligns neatly with the remaining steps in the customer lifecycle: Consideration, Purchase, and Retention. We chose this alignment because it keeps the customer front and center. https://preview.redd.it/evw1wfcnts5d1.jpg?width=1200&format=pjpg&auto=webp&s=0f1b01df240982ac51fcce04b92f2c92709f0171 Here’s how the process might unfold across each of the three main phases:

1. Presales Phase: Engaging and Understanding

In this phase, which corresponds with the consideration stage of the customer lifecycle, the sales team's primary focus is to engage with the customer to gain a deep understanding of their business environment, challenges, and specific needs. This involves initial contact, where the team establishes rapport and trust, followed by a detailed discovery process. The goal here is to gather enough information to develop a solution that precisely addresses the customer's requirements. Key activities include preparing needs assessment reports and drafting preliminary solution overviews. Risk management at this stage involves ensuring the right customer fit and maintaining a continuous feedback loop to align closely with the customer's objectives. Objective: To understand the customer's specific needs and challenges and develop a tailored solution that aligns with these requirements. Key Activities:
Aligned with the Purchase stage, the sales team now shifts to formalizing the customer’s needs into a comprehensive proposal. This phase involves a collaborative effort to develop a proposal that details the tailored solution, its benefits, and the expected value. The team then engages in discussions with the customer, inviting feedback to refine the offering. Negotiation skills come into play as the team works towards finalizing the terms of the agreement. The creation of formal proposal documents and the preparation of draft contracts or agreements are crucial. Managing risks in this phase includes clear communication of terms and employing flexible negotiation strategies to ensure a successful agreement. Objective: To formalize the customer's requirements into a concrete proposal and reach an agreement that meets both parties' needs. Key Activities:
Once the sale is made, the focus for the sales team in the Post-Sales Phase, corresponding to the Retention stage, is on ensuring a smooth implementation and fostering a long-term relationship. In most organizations, the customer relationship transfers from sales to customer success, professional services, or account management. This new team oversees the rollout of the solution, ensuring it meets the agreed specifications and customer expectations. Concurrently, they provide training and support to facilitate effective use of the solution. Regular performance monitoring and maintaining open communication lines are essential to nurture the ongoing relationship and identify additional opportunities for growth. The team prepares detailed implementation plans, training materials, and performance reports. Risk management in this phase involves closely monitoring the implementation process, conducting regular performance reviews, and building a strong, ongoing relationship with the customer. It’s worth noting that success in this phase reduces churn and can help transform customers into advocates or references that help support the sales teams in presales. Objective: To effectively implement the solution, ensure customer satisfaction, and foster a long-term relationship for ongoing business growth. Key Activities:
Gaps in the Sales Technology Stack

As we can see from the methodology above, successful value based selling depends on the depth and quality of customer interactions in every phase. In each phase, we’ve listed documents that the team would jointly author with their customer. Those mutually authored documents are pivotal in establishing and reinforcing the shared understanding between the sales team and the customer, ensuring that the proposed solutions are in perfect alignment with the customer's requirements. That agreement is what the proposals and contracts are built upon. Those documents enable the customer to sell and advocate internally, and that information is used by the post-sales teams to ensure delivery meets the mark. It’s hard to overstate the importance of the shared understanding found in these documents. Without it, sales pursuits are unlikely to be successful, and even if one manages to close, the customers are unlikely to be satisfied or referenceable. However, when we look at the typical “Sales Tech Stack” teams utilize, we don’t see much that supports teams as they pursue better customer alignment. Consider the top ten technologies used by teams today:
Improve your sales proposal efficiency using TakeTurns

Why Value Based Selling Teams Benefit From External Collaboration Tools

Today, most teams are unaware of the gap in their sales technology (salestech) stack. In fact, collaboration with their customers and prospects on those all-important VBS documents is largely done over email. Email's limitations around version control, tracking complex conversation threads, and efficiently managing document collaboration are well known. Those limitations lead to miscommunications and inefficiencies, adversely affecting the sales process, the customer experience, and customer engagement. That’s why teams looking for success with value based selling techniques should examine external collaboration tools. These tools are specifically tailored to meet the collaborative, customer-centric demands of value based selling. Successfully implemented, they can not only streamline the sales team's workflow but also enhance customer engagement and experience and improve close rates. Here's a rundown of what an ideal external collaboration toolset should include: 1. Common Workspace for Customer Collaboration: This unified platform consolidates all activities, interactions, and documents related to a customer. It's an execution-focused workspace that supports document collaboration with customers. The workspace allows both team members and customers to access, revise, and discuss various documents in a shared environment. 2. Asynchronous Collaboration Capabilities: Considering that customers and your VBS team are not always in the same geography, and certainly not under the same governance structure, the external collaboration tool must support asynchronous collaboration. This functionality enables team members and customers to contribute and provide feedback at their convenience while keeping everyone informed of the latest updates and developments. 3.
Versioning, Transparency, and Accountability: A key feature of the workspace is robust version control for all documents. This allows for tracking changes, updates, or revisions, providing transparency by showing who made specific changes and when. It ensures accountability among all parties and facilitates a clear understanding of each document's evolution, which is crucial for maintaining precision and clarity in complex sales processes. 4. Integrated Communication Tools: Beyond document collaboration, the workspace should integrate asynchronous and real-time communication tools for comments, questions, or discussions. One critical requirement is to ensure that all discussions are recorded and accessible to all participants; this helps keep everyone on the same page. 5. Security and Data Protection: With sensitive customer information often involved, the platform must have strong security measures in place. This includes data encryption, secure access controls, and compliance with data protection regulations, ensuring the confidentiality and integrity of all information. 6. Notifications and Automated Reminders: Automated notifications and reminders about document updates, deadlines, or required actions can keep the collaboration process efficient and on schedule. 7. Integration with CRM and Other Sales Tools: Ideally, the tool will have some method of tying the collaboration to the existing CRM record and other sales tools. This ensures that document-centric processes are tracked as part of the broader sales processes and customer data. Adopting these advanced, integrated collaboration platforms represents a strategic shift from traditional email communication. This move is pivotal not only for improving internal efficiency but also for elevating the overall customer journey, which is indispensable for the success of the value based selling team.
Final thoughts In summary, VBS stands out as a vital strategy in the B2B sales landscape, particularly due to its focus on understanding and delivering specific customer value. The effectiveness of this approach is deeply rooted in the quality of interactions and the depth of collaboration between the sales team and the customer. This necessitates a shift from traditional communication methods to more sophisticated external collaboration tools. These tools not only streamline the sales process but also enhance the overall customer experience, ensuring that solutions are not just sold but are perfectly aligned with the customer's needs and expectations. Embracing these tools is not just a step towards improving sales efficiency; it is a commitment to elevating the entire customer journey and achieving long-term success in value based selling. What is Value Based Selling? was previously published on our website TakeTurns |
2024.06.10 21:53 edengonedark Money stolen. NDAs keep me quiet. No victory or justice. Sometimes it kills me.
2024.06.10 21:52 Leather-Cupcake4874 A new era is starting. Apart from Jss noida where top rankers of jee main go through aktu, now jss group has started JSS UNIVERSITY.
![]() | submitted by Leather-Cupcake4874 to AKTU [link] [comments] |
2024.06.10 21:51 TechExpert2910 Apple Intelligence is out - farewell, Rabbit R1 & Humane AI Pin
![]() | submitted by TechExpert2910 to artificial [link] [comments] |
2024.06.10 21:50 Senior_Design3778 Is this real or a scam? Dashingreviews.com
![]() | I saw this Ad on tiktok for “remote” jobs, saying that we could review zara clothes by getting a $700 gift card, and we get to keep the clothes, it seems too good to be true. submitted by Senior_Design3778 to Scams [link] [comments] The initial website is dashingreviews.com, then you get relocated to promitionsonlineusa.com to put in your information. I’ve attached the images - it asked for my address, name, email, and phone number, but no payment method. I’m a bit skeptical to give my address. Has anyone tried this before? did it work? |
2024.06.10 21:44 No-Toe-8849 Dealing with missing e-mail address when sending passwords