

2024.05.16 14:03 Agitated-Trick Everyone's Listening, All Supertramp songs, ranked - Land Ho(#83)

"Land Ho" single A-side, 1974
Listen to it here
A song so nice, they recorded it thrice. First as a single in 1974, then remixed for inclusion on Crisis? What Crisis? (although it ultimately didn't appear on the album), and finally for Roger's solo career. {1}
"Land Ho" was later rerecorded by Roger Hodgson for his 1987 solo album Hai Hai (with new lyrics).
Being the first commercial release of the "classic lineup" consisting of Roger, Rick, John, Bob and Dougie, "Land Ho" barely resembles their later output, being stylistically more similar to Indelibly Stamped with its somewhat straightforward composition. Of course, John's sax during the outro is splendid as always, a sign of things to come. It's just that in terms of catchiness it's way, way inferior to the B-side, Summer Romance. I guess Supertramp has a history of putting the better songs on the B-side.
Lyrically, it reminds me a bit of Times Have Changed, I think because both go for this "sailor feeling lost" theme. The Hai Hai version is a bit longer and feels more "80s" (especially those drum fills and the reverb on the piano, my god), but I think it's not as good as the original despite the awesome sax fills all throughout.
If you were to ask me which version of the song should be the definitive one, I'd go with the 1975 remix found on Retrospectacle. It's easily the most "complete" version.
{1} Wikipedia
Index
submitted by Agitated-Trick to supertramp [link] [comments]


2024.05.16 13:33 samael6 Issues Searching for Movies on Radarr - API Error Help Needed

Hello everyone,
I'm facing a problem when trying to search for any movie on Radarr. Every time I perform a search, I get the following error message: "Search for '' failed. Invalid response received from RadarrAPI."

Problem Details

Steps to Reproduce

  1. Open Radarr.
  2. Go to the search field.
  3. Search for any movie (e.g., 'The Garfield Movie').

Expected Behavior

The searched movie should be found and listed.

Actual Behavior

An error occurs: "Search for '' failed. Invalid response received from RadarrAPI."

Logs

```
[Fatal] RadarrErrorPipeline: Request Failed. GET /MediaCoverProxy/7d0f1a0a0a793161319bfa3750b44a99d6c403cb92ce2e5f0dd11d0acc38d652/OHGtRqIim2cEHOYKPlbgNOV6Cb.jpg
[v5.6.0.8846] System.Net.Http.HttpRequestException: Resource temporarily unavailable (image.tmdb.org:443)
- System.Net.Sockets.SocketException (11): Resource temporarily unavailable
at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.ThrowException(SocketError error, CancellationToken cancellationToken)
at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.System.Threading.Tasks.Sources.IValueTaskSource.GetResult(Int16 token)
at System.Net.Sockets.Socket.g__WaitForConnectWithCancellation277_0(AwaitableSocketAsyncEventArgs saea, ValueTask connectTask, CancellationToken cancellationToken)
at NzbDrone.Common.Http.Dispatchers.ManagedHttpDispatcher.attemptConnection(AddressFamily addressFamily, SocketsHttpConnectionContext context, CancellationToken cancellationToken) in ./Radarr.Common/Http/Dispatchers/ManagedHttpDispatcher.cs:line 337
at NzbDrone.Common.Http.Dispatchers.ManagedHttpDispatcher.onConnect(SocketsHttpConnectionContext context, CancellationToken cancellationToken) in ./Radarr.Common/Http/Dispatchers/ManagedHttpDispatcher.cs:line 313
at System.Net.Http.HttpConnectionPool.ConnectToTcpHostAsync(String host, Int32 port, HttpRequestMessage initialRequest, Boolean async, CancellationToken cancellationToken)
--- End of inner exception stack trace ---
at System.Net.Http.HttpConnectionPool.ConnectToTcpHostAsync(String host, Int32 port, HttpRequestMessage initialRequest, Boolean async, CancellationToken cancellationToken)
at System.Net.Http.HttpConnectionPool.ConnectAsync(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken)
at System.Net.Http.HttpConnectionPool.AddHttp2ConnectionAsync(HttpRequestMessage request)
at System.Threading.Tasks.TaskCompletionSourceWithCancellation`1.WaitWithCancellationAsync(CancellationToken cancellationToken)
at System.Net.Http.HttpConnectionPool.GetHttp2ConnectionAsync(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken)
at System.Net.Http.HttpConnectionPool.SendWithVersionDetectionAndRetryAsync(HttpRequestMessage request, Boolean async, Boolean doRequestAuth, CancellationToken cancellationToken)
at System.Net.Http.AuthenticationHelper.SendWithAuthAsync(HttpRequestMessage request, Uri authUri, Boolean async, ICredentials credentials, Boolean preAuthenticate, Boolean isProxyAuth, Boolean doRequestAuth, HttpConnectionPool pool, CancellationToken cancellationToken)
at System.Net.Http.DiagnosticsHandler.SendAsyncCore(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken)
at System.Net.Http.DecompressionHandler.SendAsync(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken)
at System.Net.Http.HttpClient.g__Core83_0(HttpRequestMessage request, HttpCompletionOption completionOption, CancellationTokenSource cts, Boolean disposeCts, CancellationTokenSource pendingRequestsCts, CancellationToken originalCancellationToken)
at NzbDrone.Common.Http.Dispatchers.ManagedHttpDispatcher.GetResponseAsync(HttpRequest request, CookieContainer cookies) in ./Radarr.Common/Http/Dispatchers/ManagedHttpDispatcher.cs:line 115
at NzbDrone.Common.Http.HttpClient.ExecuteRequestAsync(HttpRequest request, CookieContainer cookieContainer) in ./Radarr.Common/Http/HttpClient.cs:line 157
at NzbDrone.Common.Http.HttpClient.ExecuteAsync(HttpRequest request) in ./Radarr.Common/Http/HttpClient.cs:line 70
at NzbDrone.Core.MediaCover.MediaCoverProxy.GetImage(String hash) in ./Radarr.Core/MediaCover/MediaCoverProxy.cs:line 67
at Radarr.Http.Frontend.Mappers.MediaCoverProxyMapper.GetResponse(String resourceUrl) in ./Radarr.Http/Frontend/Mappers/MediaCoverProxyMapper.cs:line 46
at Radarr.Http.Frontend.StaticResourceController.MapResource(String path) in ./Radarr.Http/Frontend/StaticResourceController.cs:line 75
at Radarr.Http.Frontend.StaticResourceController.Index(String path) in ./Radarr.Http/Frontend/StaticResourceController.cs:line 47
at Microsoft.AspNetCore.Mvc.Infrastructure.ActionMethodExecutor.TaskOfIActionResultExecutor.Execute(IActionResultTypeMapper mapper, ObjectMethodExecutor executor, Object controller, Object[] arguments)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.g__Awaited12_0(ControllerActionInvoker invoker, ValueTask`1 actionResultValueTask)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.g__Awaited10_0(ControllerActionInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Rethrow(ActionExecutedContextSealed context)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.g__Awaited13_0(ControllerActionInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.g__Awaited20_0(ResourceInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.g__Awaited17_0(ResourceInvoker invoker, Task task, IDisposable scope)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.g__Awaited17_0(ResourceInvoker invoker, Task task, IDisposable scope)
at Microsoft.AspNetCore.Routing.EndpointMiddleware.g__AwaitRequestTask6_0(Endpoint endpoint, Task requestTask, ILogger logger)
at Radarr.Http.Middleware.BufferingMiddleware.InvokeAsync(HttpContext context) in ./Radarr.Http/Middleware/BufferingMiddleware.cs:line 28
at Radarr.Http.Middleware.IfModifiedMiddleware.InvokeAsync(HttpContext context) in ./Radarr.Http/Middleware/IfModifiedMiddleware.cs:line 41
at Radarr.Http.Middleware.CacheHeaderMiddleware.InvokeAsync(HttpContext context) in ./Radarr.Http/Middleware/CacheHeaderMiddleware.cs:line 33
at Radarr.Http.Middleware.StartingUpMiddleware.InvokeAsync(HttpContext context) in ./Radarr.Http/Middleware/StartingUpMiddleware.cs:line 38
at Radarr.Http.Middleware.UrlBaseMiddleware.InvokeAsync(HttpContext context) in ./Radarr.Http/Middleware/UrlBaseMiddleware.cs:line 27
at Radarr.Http.Middleware.VersionMiddleware.InvokeAsync(HttpContext context) in ./Radarr.Http/Middleware/VersionMiddleware.cs:line 29
at Microsoft.AspNetCore.ResponseCompression.ResponseCompressionMiddleware.InvokeCore(HttpContext context)
at Microsoft.AspNetCore.Authorization.Policy.AuthorizationMiddlewareResultHandler.HandleAsync(RequestDelegate next, HttpContext context, AuthorizationPolicy policy, PolicyAuthorizationResult authorizeResult)
at Microsoft.AspNetCore.Authorization.AuthorizationMiddleware.Invoke(HttpContext context)
at Microsoft.AspNetCore.Authentication.AuthenticationMiddleware.Invoke(HttpContext context)
at Microsoft.AspNetCore.Diagnostics.ExceptionHandlerMiddleware.g__Awaited6_0(ExceptionHandlerMiddleware middleware, HttpContext context, Task task)
```
Additional Information
I have checked the Pi-hole DNS resolver, and all requests are fine. I also changed the DNS on the host machine to 1.1.1.1 and 8.8.8.8, but the problem persists. Accessing the API directly via the browser (https://image.tmdb.org/) returns a "Bad request" error, indicating that the request might be incorrect or that the connection to the HTTPS server is being closed.
Any help or suggestions would be greatly appreciated!
EDIT:
I already opened 2 issues: https://github.com/Radarr/Radarr/issues/10030 and https://github.com/linuxserver/docker-radarr/issues/229
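For anyone hitting the same thing, a quick way to separate DNS problems from TCP/TLS problems is to test the exact host from the Radarr log on the machine (or inside the container) that runs Radarr. This is only a minimal sketch and assumes Python 3 is available there; the host and port come straight from the log above:

```python
# Minimal reachability check for the host Radarr is failing to contact.
# Assumes Python 3 on the machine/container running Radarr; adjust HOST/PORT if needed.
import socket
import ssl

HOST, PORT = "image.tmdb.org", 443

try:
    # Resolve the name first so DNS failures are distinguishable from TCP/TLS failures.
    addr_info = socket.getaddrinfo(HOST, PORT, proto=socket.IPPROTO_TCP)
    print(f"DNS OK: {HOST} -> {addr_info[0][4][0]}")

    # Open a TCP connection and complete a TLS handshake.
    with socket.create_connection((HOST, PORT), timeout=10) as sock:
        with ssl.create_default_context().wrap_socket(sock, server_hostname=HOST) as tls:
            print(f"TLS OK: negotiated {tls.version()} with {HOST}:{PORT}")
except OSError as exc:
    # "Resource temporarily unavailable" from the Radarr log would surface here as an OSError.
    print(f"Connection failed: {exc}")
```

If this fails inside the container but works on the host, the problem is most likely the container's network or DNS configuration rather than Radarr itself.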
submitted by samael6 to radarr [link] [comments]


2024.05.16 12:21 Big-Knowledge9926 Kinaxis Selected by Harley-Davidson as Supply Chain Management Platform Solution

(BUSINESS WIRE) -- Kinaxis® Inc. (TSX: KXS), a global leader in end-to-end supply chain orchestration, today announced that Harley-Davidson (NYSE: HOG), the world’s most iconic motorcycle brand, has selected Kinaxis to accelerate the transformation of the company’s global supply chain. https://www.aetoswire.com/en/news/1505202439370
submitted by Big-Knowledge9926 to u/Big-Knowledge9926 [link] [comments]


2024.05.16 11:45 mame_is_me yt_dlp fail with any youtube video

Any idea why it fails on every YouTube video? I'm on Python 3.9 and installed it with pip install yt_dlp -U
the error is
```
[youtube] Extracting URL: https://www.youtube.com/watch?v=ZrQJaLemaoE
[youtube] ZrQJaLemaoE: Downloading webpage
WARNING: [youtube] unable to extract initial player response; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
[youtube] ZrQJaLemaoE: Downloading ios player API JSON
WARNING: [youtube] ZrQJaLemaoE: Failed to parse JSON (caused by JSONDecodeError("Expecting value in '': line 1 column 1 (char 0)")); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
[youtube] ZrQJaLemaoE: Downloading android player API JSON
WARNING: [youtube] ZrQJaLemaoE: Failed to parse JSON (caused by JSONDecodeError("Expecting value in '': line 1 column 1 (char 0)")); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
[youtube] ZrQJaLemaoE: Downloading iframe API JS
WARNING: [youtube] unable to extract player version; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
[youtube] ZrQJaLemaoE: Downloading web player API JSON
WARNING: [youtube] ZrQJaLemaoE: Failed to parse JSON (caused by JSONDecodeError("Expecting value in '': line 1 column 1 (char 0)")); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
```

ERROR: [youtube] ZrQJaLemaoE: Failed to extract any player response; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U

```
ExtractorError                            Traceback (most recent call last)
File /usr/local/anaconda3/envs/py39/lib/python3.9/site-packages/yt_dlp/YoutubeDL.py:1606, in YoutubeDL._handle_extraction_exceptions.<locals>.wrapper(self, *args, **kwargs)
   1605 try:
-> 1606     return func(self, *args, **kwargs)
   1607 except (DownloadCancelled, LazyList.IndexError, PagedList.IndexError):

File /usr/local/anaconda3/envs/py39/lib/python3.9/site-packages/yt_dlp/YoutubeDL.py:1741, in YoutubeDL._extract_info(self, url, ie, download, extra_info, process)
   1740 try:
-> 1741     ie_result = ie.extract(url)
   1742 except UserNotLive as e:

File /usr/local/anaconda3/envs/py39/lib/python3.9/site-packages/yt_dlp/extractor/common.py:734, in InfoExtractor.extract(self, url)
    732 self.to_screen('Extracting URL: %s' % (
    733     url if self.get_param('verbose') else truncate_string(url, 100, 20)))
--> 734 ie_result = self._real_extract(url)
    735 if ie_result is None:

File /usr/local/anaconda3/envs/py39/lib/python3.9/site-packages/yt_dlp/extractor/youtube.py:4079, in YoutubeIE._real_extract(self, url)
   4077 webpage_url = base_url + 'watch?v=' + video_id
-> 4079 webpage, master_ytcfg, player_responses, player_url = self._download_player_responses(url, smuggled_data, video_id, webpage_url)
   4081 playability_statuses = traverse_obj(
   4082     player_responses, (..., 'playabilityStatus'), expected_type=dict)

File /usr/local/anaconda3/envs/py39/lib/python3.9/site-packages/yt_dlp/extractor/youtube.py:4043, in YoutubeIE._download_player_responses(self, url, smuggled_data, video_id, webpage_url)
   4041 master_ytcfg = self.extract_ytcfg(video_id, webpage) or self._get_default_ytcfg()
-> 4043 player_responses, player_url = self._extract_player_responses(
   4044     self._get_requested_clients(url, smuggled_data),
   4045     video_id, webpage, master_ytcfg, smuggled_data)
   4047 return webpage, master_ytcfg, player_responses, player_url

File /usr/local/anaconda3/envs/py39/lib/python3.9/site-packages/yt_dlp/extractor/youtube.py:3733, in YoutubeIE._extract_player_responses(self, clients, video_id, webpage, master_ytcfg, smuggled_data)
   3732 elif not prs:
-> 3733     raise ExtractorError('Failed to extract any player response')
   3734 return prs, player_url

ExtractorError: [youtube] ZrQJaLemaoE: Failed to extract any player response; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
```
During handling of the above exception, another exception occurred:
```
DownloadError                             Traceback (most recent call last)
Cell In[8], line 10
      3 ydl_opts = {
      4     "write_auto_sub":True,
      5     "sub_lang":'en',
      6     "skip_download":True,
      7     "output":'/tmp/sub.txt',
      8 }
      9 with yt_dlp.YoutubeDL(ydl_opts) as ydl:
---> 10     ydl.download(url)

File /usr/local/anaconda3/envs/py39/lib/python3.9/site-packages/yt_dlp/YoutubeDL.py:3572, in YoutubeDL.download(self, url_list)
   3569     raise SameFileError(outtmpl)
   3571 for url in url_list:
-> 3572     self._download_wrapper(self.extract_info)(
   3573         url, force_generic_extractor=self.params.get('force_generic_extractor', False))
   3575 return self._download_retcode

File /usr/local/anaconda3/envs/py39/lib/python3.9/site-packages/yt_dlp/YoutubeDL.py:3547, in YoutubeDL._download_wrapper.<locals>.wrapper(*args, **kwargs)
   3544 @functools.wraps(func)
   3545 def wrapper(*args, **kwargs):
   3546     try:
-> 3547         res = func(*args, **kwargs)
   3548     except UnavailableVideoError as e:
   3549         self.report_error(e)

File /usr/local/anaconda3/envs/py39/lib/python3.9/site-packages/yt_dlp/YoutubeDL.py:1595, in YoutubeDL.extract_info(self, url, download, ie_key, extra_info, process, force_generic_extractor)
   1593         raise ExistingVideoReached()
   1594     break
-> 1595 return self._extract_info(url, self.get_info_extractor(key), download, extra_info, process)
   1596 else:
   1597     extractors_restricted = self.params.get('allowed_extractors') not in (None, ['default'])

File /usr/local/anaconda3/envs/py39/lib/python3.9/site-packages/yt_dlp/YoutubeDL.py:1624, in YoutubeDL._handle_extraction_exceptions.<locals>.wrapper(self, *args, **kwargs)
   1622     self.report_error(msg)
   1623 except ExtractorError as e:  # An error we somewhat expected
-> 1624     self.report_error(str(e), e.format_traceback())
   1625 except Exception as e:
   1626     if self.params.get('ignoreerrors'):

File /usr/local/anaconda3/envs/py39/lib/python3.9/site-packages/yt_dlp/YoutubeDL.py:1073, in YoutubeDL.report_error(self, message, *args, **kwargs)
   1068 def report_error(self, message, *args, **kwargs):
   1069     '''
   1070     Do the same as trouble, but prefixes the message with 'ERROR:', colored
   1071     in red if stderr is a tty file.
   1072     '''
-> 1073     self.trouble(f'{self._format_err("ERROR:", self.Styles.ERROR)} {message}', *args, **kwargs)

File /usr/local/anaconda3/envs/py39/lib/python3.9/site-packages/yt_dlp/YoutubeDL.py:1012, in YoutubeDL.trouble(self, message, tb, is_error)
   1010 else:
   1011     exc_info = sys.exc_info()
-> 1012     raise DownloadError(message, exc_info)
   1013 self._download_retcode = 1

DownloadError: ERROR: [youtube] ZrQJaLemaoE: Failed to extract any player response; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
```
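Those "unable to extract ..." warnings are the classic symptom of an out-of-date extractor, so the first thing worth trying is upgrading yt-dlp (pip install -U yt-dlp) and re-running a minimal call with verbose logging. Below is only a small sketch under that assumption; the option keys are the yt-dlp Python-API spellings of what the cell above seems to intend, and the output path is purely illustrative:

```python
# Re-test after upgrading: pip install -U yt-dlp
import yt_dlp

ydl_opts = {
    "writeautomaticsub": True,   # download the auto-generated subtitles
    "subtitleslangs": ["en"],    # English subtitles only
    "skip_download": True,       # subtitles only, no video download
    "outtmpl": "/tmp/sub",       # output template (illustrative path)
    "verbose": True,             # prints the yt-dlp version and extractor details
}

with yt_dlp.YoutubeDL(ydl_opts) as ydl:
    ydl.download(["https://www.youtube.com/watch?v=ZrQJaLemaoE"])
```

If the same ExtractorError persists on the latest release, the verbose output is exactly what the yt-dlp issue template asks for.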

submitted by mame_is_me to youtubedl [link] [comments]


2024.05.16 10:48 DT-Sodium JavaScript application not indexed because of access restrictions on its API

Hello,
I developed an online shop with an Angular front office and a PHP API. I noticed that the shop's pages are not indexed properly: they show our server error popup instead of the correct content, and using Google Search Console to inspect URLs, the problem seems to be that Google's crawlers are denied access to the API.
My API does indeed have a disallow-all rule in its robots.txt, because I don't want the API to show up in search engine results, but I still want crawlers to be able to query it so they can render the front end properly. How can I achieve that?
submitted by DT-Sodium to SEO [link] [comments]


2024.05.16 10:08 Mart-coder76 Export Table(s) from Desmos graphing calculator

Hi, I've always missed being able to export tables from Desmos via copy-paste into e.g. Excel.
I wrote a little JS script, with some help from ChatGPT, that does the trick by converting them to a regular HTML table.
I use it as a bookmarklet, so it is very easy to use. Open your web browser's bookmarks or favorites manager.
In most browsers, you can do this by pressing Ctrl+Shift+B (Windows) or Cmd+Shift+B (Mac).
Create a new bookmark.
You can name it something like "Desmos Tables". Copy the following code: javascript:(function(){let tableContainers=document.querySelectorAll('.dcg-table-container');let zIndexStart=1000;tableContainers.forEach((container,index)=>{let tableDiv=document.createElement('div');tableDiv.style.position='fixed';tableDiv.style.top=`${40+index*15}px`;tableDiv.style.left=`${40+index*15}px`;tableDiv.style.width='40%';tableDiv.style.height='80%';tableDiv.style.border='1px solid #ccc';tableDiv.style.backgroundColor='white';tableDiv.style.padding='10px';tableDiv.style.zIndex=zIndexStart+index;tableDiv.style.overflowY='auto';tableDiv.style.boxShadow='0 4px 8px rgba(0,0,0,0.1)';tableDiv.style.resize='both';tableDiv.style.userSelect='text';let closeButton=document.createElement('span');closeButton.innerText='x';closeButton.style.position='absolute';closeButton.style.top='5px';closeButton.style.right='10px';closeButton.style.cursor='pointer';closeButton.style.fontWeight='bold';closeButton.onclick=()=>tableDiv.remove();let heading=document.createElement('h2');heading.innerText=`Table ${index+1}`;heading.style.margin='0 0 10px 0';heading.style.fontSize='16px';heading.style.fontWeight='bold';let htmlTable=document.createElement('table');htmlTable.style.width='100%';htmlTable.style.borderCollapse='collapse';htmlTable.style.userSelect='text';let rows=container.querySelectorAll('.dcg-row');rows.forEach(row=>{let htmlRow=document.createElement('tr');let cells=row.querySelectorAll('.dcg-cell');cells.forEach(cell=>{let htmlCell=document.createElement('td');htmlCell.style.border='1px solid #ddd';htmlCell.style.padding='8px';htmlCell.style.userSelect='text';let cellContent=cell.querySelector('.dcg-mq-root-block');if(cellContent&&cellContent.innerText.trim()!==""){htmlCell.innerText=cellContent.innerText.trim();}else{htmlCell.innerHTML='&nbsp;';}htmlRow.appendChild(htmlCell);});htmlTable.appendChild(htmlRow);});tableDiv.appendChild(heading);tableDiv.appendChild(htmlTable);tableDiv.appendChild(closeButton);document.body.appendChild(tableDiv);});if(tableContainers.length===0){alert('No tables found.');}else{alert(`${tableContainers.length} table(s) found and displayed.`);}})();
Paste the copied code into the URL field of the new bookmark. Ensure the URL starts with javascript: and that the entire code is on a single line without any line breaks. Save the bookmark.
How to Use the Bookmarklet Navigate to a Desmos page that contains tables. Click on the bookmarklet in your bookmarks toolbar.
The bookmarklet will find the tables on the page, display them in floating div elements, and number each table. Empty cells will contain a non-breaking space to ensure they are copied correctly.
Have fun!
submitted by Mart-coder76 to desmos [link] [comments]


2024.05.16 09:07 Emcf Building Multimodal Apps with GPT-4o

I'm sure most of you know OpenAI just announced GPT-4o, a new model that can reason across audio, vision, and text in real time (See OpenAI's demo on YouTube)
By chance, I've been working with multimodal models far before GPT-4o's release, so I feel well-positioned to leave a guide here for anyone looking to build something with it!
Recently, I released an open source library so you can extract data in multiple modalities to feed your AI-based Python projects. In this post, I'll show you how to use it alongside GPT-4o with the OpenAI API to build multimodal apps with it. I've got nothing better to do right now, so I'll walk through the steps of extracting all multimodal content from different sources, preparing the input for GPT-4o, sending it to the model for processing, and getting our results back.
Before getting into the code, let's just stop and ask ourselves why we'd use GPT-4o over previous models like GPT-4-turbo:
Multi-modal Input and Output: GPT-4o can handle text, audio, and image inputs and generate outputs in any of these formats.
Real-time Processing: The model can respond to audio inputs in as little as 232 milliseconds, making it suitable for real-time applications.
Improved Performance: GPT-4o matches GPT-4 Turbo performance on text in English and code, with significant improvements in non-English languages, vision, and audio understanding.
Cost and Speed: GPT-4o is 50% cheaper and 2x faster than GPT-4 Turbo, with 5x higher rate limits.
Ok, let's get to the code lol:

Step 1: Extract!

This can be done using The Pipe API which can handle various file types and URLs, extracting text and images in a format that GPT-4o can understand.
For example, if we were analyzing a talk based on a scientific paper, we could combine the two sources to provide a comprehensive input to GPT-4o:
```
from thepipe_api import thepipe

# Extract multimodal content from a PDF
pdf = thepipe.extract("path/to/paper.pdf")

# Extract multimodal content from a YouTube video
vid = thepipe.extract("https://youtu.be/dQw4w9WgXcQ")
```

Step 2: Prepare the Input for GPT-4o

Here's an example of how to prepare the input prompt by simply combining the extracted content with a question from the user:
```
# Add a user query
query = [{
    "role": "user",
    "content": "Which figures from the paper would help answer the question at the end of the talk video?"
}]

# Combine the content to create the input prompt for GPT-4o
messages = pdf + vid + query
```

Step 3: Send the Input to GPT-4o

With the input prepared, you can now send it to GPT-4o using the OpenAI API. Make sure you have your OPENAI_API_KEY set in your environment variables.
```
from openai import OpenAI

# Initialize the OpenAI client
openai_client = OpenAI()

# Send the input to GPT-4o
response = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
)

# Print the response
print(response.choices[0].message.content)
```

All done!

PS:
If you have literally no idea what I'm talking about, check out the OpenAI GPT-4o announcement!
If you're a developer, feel free to access or contribute to The Pipe on GitHub! It is important to note that OpenAI's GPT-4o model is only accepting textual and visual modalities at release; however, we will be carefully monitoring the new modalities released for GPT-4o in the coming weeks and updating the library accordingly.
submitted by Emcf to ArtificialInteligence [link] [comments]


2024.05.16 08:12 EvolvingMind 🔥 5 things you shouldn't miss this weekend in Vienna

Nicely formatted web version here: https://wienwasgeht.beehiiv.com/p/best-things-to-do-in-vienna-may

🇦🇹 Weekend 16.5 - 19.5

Thur:🌤️20° Fr: 🌧️18° Sat:☀️23° Sun:🌤️23° 🇬🇧 Scroll down for English👇

🤌 Laurin’s Picks

This week comes with a few extra events for everyone who also wants to make the most of Sunday evening! 🙂

1/ Thursday Special: Free Live Concert (Thur 19h - Free)

On Thursday there is once again a concert at the Superbude! This time it's 'James Choice'. You can listen in here. I'll probably be there too, so if anyone sees me, say hello, I'd be happy! 🙂
Alternatively, the 'Sky Garden' is also opening, with after-work drinks and a great view over the city!

2/ Flea Market Karmelitamarkt (Thur-Fr 08-20h - Free)

Alongside the varied and exclusive special offers from the resident specialty shops and the private exhibitors, it is above all artists with special one-of-a-kind pieces who will make the hearts of all collectors and art lovers beat faster.

3/ International Documentary Film Festival (Thur-Sun - €9)

The International Documentary Film Festival kicks off in Vienna on Thursday! The halls of the Votiv and De France cinemas once again become the center of documentary film art and ethnographic filmmaking.
Maybe a good fit for the rainy Friday? 🌧️

4/ Asian & Austrian street food and culture festival (Fr-Sun 10-22h - Free)

A mix of culture and street food festival with a focus on Asian and Austrian culture and cuisine. Sounds a bit crazy somehow, but maybe fun? In any case, it combines well with a trip to the Donauinsel!

5/ Open Air Rave (Sun 15-22h - €10)

On Sunday there is a pretty cool open-air rave in the Prater by 'Du Tanzt Mich Mal'. The main floor has house, minimal and acid, and the rooftop floor has downtempo.
Oh, and alternatively you can also rave right at the Kobenzl at 'Kein Sonntag ohne Techno'.

🎨 A little bit of culture & other 💎

🔥Party Party

What are the best parties this weekend?
Thursday
Friday
Saturday
Sunday
Cosmic Soul - Pratersauna // Sex positive, techno, psytrance
Steve Hope - Vienna Hope Beach Club // Electronic
Live Music (Tobias Kirchebner) - Gleis Garten // Live concert
Felix Jaehn - O der Klub // House
Heaven Vienna (Orange Edition) - The Loft // Pop, House
Kein Sonntag Ohne Techno - Schloss Cobenzl // Techno
Get this every week into your inbox here: https://wienwasgeht.beehiiv.com/
--------------------

🇬🇧 English Version

Thur:🌤️20° Fr: 🌧️18° Sat:☀️23° Sun:🌤️23

🤌 Laurin’s Picks

This week a couple of extra events as the Monday is a public holiday! 🙂 Let’s get to it 🚀

1/ Thursday Special: Free Live Concert (Thur 19h - Free)

This Thursday there will be another free concert at Superbude in the second district. This time there will be a concert of James Choice, you can listen to the music here. I will most likely be there, so say hello if you are also there! 🙂
Alternatively, this weekend the ‘Sky Garden’ is officially kicking off the season so if you want a nice after work drink with an amazing view, this is the place to be!

2/ Second Hand Market Karmelitamarkt (Thur-Fr 08-20h - Free)

Next to special offers from all the local shops, there will be a variety of artists displaying their art right in the second district! Great if you enjoy browsing through some unique art!

3/ International Documentary Film Festival (Thur-Sun - €9)

This Friday there will be the kick-off of the international film festival in Vienna. There will be various documentaries shown in both the Votiv as well as the De France cinema.
Maybe perfect for a rainy Friday? 🌦️

4/ Asian & Austrian street food and culture festival (Fr-Sun 10-22h - Free)

Potentially a strange mixture of different cultures, food and traditions. At the same time could be pretty cool and refreshing to combine all of this into one street food festival!
Either way, it’s happening on the Donauinsel, so if the weather is nice, you can definitely enjoy this as a small trip! 🙂

5/ Open Air Rave (Sun 15-22h - €10)

This Sunday there are a couple of nice raves! First up, at Prater with ‘Du Tanzt Mich Mal’. They will play a mixture of house, minimal, acid and downtempo.
Alternatively, you can also check out the party happening at the Kobenzl, which will provide you with probably the best view over Vienna you can get. The party there is ‘Kein Sonntag ohne Techno’.

🎨 A little bit of culture & other 💎

🔥Party Party

What are the best parties this weekend?
Thursday
Friday
Saturday
Sunday
------------

⭐️️ How did I do?

You can also write to me directly at [laurin.wirth@gmail.com](mailto:laurin.wirth@gmail.com).
------------
Get notified for the next version here
submitted by EvolvingMind to wien [link] [comments]


2024.05.16 06:12 Forward_Day_1462 [NM] NASA Artemis Space Launch System 10341 - 69 spots @ $5 each

Item Name/Set Number: NASA Artemis Space Launch System 10341
Set Price: $287 w/ tax
Shipping: $58 - UPS Ground, 24x16x7in, 12 lbs, insured to $695, 94607 to 10012
Raffle Total: $345 - 69 spots @ $5 each
Justification: Lego.com
Call spots? Yes
Spot limit per person? N
Duration of spot limit? N/A
Location(Country): USA
Will ship international? No, sorry.
Timestamp: https://imgur.com/a/xqZVxY1
Description: to infinity and beyond?! Box in good condition. See pics.
Payment required within 20 minutes of raffle filling. 10 minutes for drama.
DO NOT ADD A COMMENT TO YOUR PAYMENT; IF YOU HAVE ANY QUESTIONS, PLEASE MESSAGE ME BEFORE PAYING

PayPal Info: DM me for PP
Cash App Info: https://cash.app

Tip BlobAndHisBoy
Number of vacant slots: 9
Number of unpaid users: 4
Number of unpaid slots: 13
This slot list is created and updated by The EDC Raffle Tool by BlobAndHisBoy.
1 timotatoe PAID
2
3 timotatoe PAID
4 manomacho PAID
5 azcpl121314 PAID
6 ssj3dvp11 PAID
7 Bosskz PAID
8 Bosskz PAID
9 TeddyLea PAID
10 azcpl121314 PAID
11 ssj3dvp11 PAID
12 daisho87 PAID
13 robob280 PAID
14 Echobomb23
15 Echobomb23
16 manomacho PAID
17 ZappBrannigansLaw PAID
18 ZappBrannigansLaw PAID
19 robob280 PAID
20 manomacho PAID
21 azcpl121314 PAID
22 TeddyLea PAID
23 Echobomb23
24
25 Bosskz PAID
26 emptyanalysis
27
28 robob280 PAID
29 manomacho PAID
30 Echobomb23
31 timotatoe PAID
32 timotatoe PAID
33
34 ZappBrannigansLaw PAID
35 TeddyLea PAID
36 emptyanalysis
37 stollba PAID
38 Fast-Associate8069
39 TeddyLea PAID
40 ssj3dvp11 PAID
41 timotatoe PAID
42 timotatoe PAID
43 emptyanalysis
44 ssj3dvp11 PAID
45 robob280 PAID
46 yo-Marie-yo PAID
47
48 robob280 PAID
49 azcpl121314 PAID
50 Bosskz PAID
51 Bosskz PAID
52 ImSoChopped PAID
53 robob280 PAID
54 emptyanalysis
55
56
57 Echobomb23
58 Fast-Associate8069
59 timotatoe PAID
60 stollba PAID
61 timotatoe PAID
62
63 emptyanalysis
64 vangobroom97
65
66 ssj3dvp11 PAID
67 timotatoe PAID
68 timotatoe PAID
69 robob280 PAID

submitted by Forward_Day_1462 to lego_raffles [link] [comments]


2024.05.16 04:53 INFJPersonality-52 HOME OWNERS ASSOCIATIONS

HOME OWNERS ASSOCIATIONS
Welcome to Kelly Kerr's Condo Corner, your go-to source for insights into the dynamic world of community associations. Today, we delve into the fascinating realm of homeowner associations (HOAs) in Florida, where some associations, despite resembling condominiums in many aspects, are legally classified as HOAs. Let's explore the nuances of these associations and the implications for homeowners and managers alike.
In Florida, the legal framework governing HOAs is outlined in Chapter 720 of the Florida Statutes. This comprehensive statute sets forth the rights, responsibilities, and governance structure for HOAs, ensuring transparency and accountability in community management.
One intriguing aspect of HOAs in Florida is the existence of associations that function similarly to condominiums yet are classified as HOAs due to the language in their governing documents. These associations, often called "condo-like" HOAs, may feature amenities and services typically associated with condominium living, such as shared common areas, recreational facilities, and maintenance responsibilities.
Despite their similarities to condominiums, condo-like HOAs operate under a distinct legal framework governed by Chapter 720 of the Florida Statutes. This distinction is significant, as it impacts various aspects of community governance, including the election and powers of the association's board of directors, enforcement of rules and regulations, and collecting assessments.
For homeowners and managers involved in these condo-like HOAs, understanding the nuances of the legal framework is essential for effective governance and compliance. From conducting board meetings and budget planning to resolving disputes and enforcing covenants, managers play a crucial role in ensuring the association's smooth operation and its residents' well-being.
One challenge condo-like HOAs face is the potential for confusion among homeowners regarding their rights and obligations. Due to the similarities with condominiums, homeowners may mistakenly assume that their association operates under condominium laws, leading to misunderstandings and disputes. Clear communication and education initiatives can help mitigate these challenges and ensure that homeowners are well-informed about the unique aspects of their HOA.
Despite the complexities, condo-like HOAs offer homeowners the opportunity to enjoy the benefits of community living while retaining the flexibility and autonomy afforded by the HOA structure. By working collaboratively with homeowners and managers, these associations can create vibrant and harmonious communities where residents thrive and take pride in their shared environment.
As we continue to explore the diverse landscape of community associations in Florida, Kelly Kerr's Condo Corner remains dedicated to providing valuable insights and resources to empower homeowners and managers alike. Stay tuned for more informative articles and expert analysis on all things related to community living.
As always, I am not an attorney. The link to the Florida Statute is below:
http://www.leg.state.fl.us/statutes/index.cfm?App_mode=Display_Statute&URL=0700-0799/0720/0720.html
submitted by INFJPersonality-52 to Condo_Corner_Kelly_Ke [link] [comments]


2024.05.16 04:07 codeyCode Can someone explain why this form code will not save more than one record, and offer a solution?

I'm trying to write a form that will loop through a table of questions and print answer choices for each question.
The user selects one answer choice from each question and each of those answers gets saved to an Answers table as a separate record.
However, my code will only save the last question.
I'm using HTML instead of Rails form helpers and want to stick with that. Basically, there is JavaScript that takes the answer choice and updates a hidden field. This works, but upon submission, only the last option actually gets put into the table.
Any idea how to fix this so that it works? I've been trying to figure this out for a month.
```
<%= form_with model: quizModel, url: quiz_answers_path do |myAnswer| %>
<% prompts.each_with_index do |question, q_index| %>
<%= myAnswer.hidden_field :user_id, value: current_user.id %>
<%= myAnswer.hidden_field :question_id, value: question.id %>
<%= myAnswer.hidden_field :answer_value, value: "", id: "question#{q_index}" %>
<%= question.text %>
<% question.answerText.each_with_index do |answer, a_index| %>
>
Yes
>
No
>
I don't know
<% end %>
<% end %>
<%= myAnswer.submit "SAVE", class: "btns submit-btns", id: "signup-submit-btn" %>
<% end %>
```
submitted by codeyCode to rails [link] [comments]


2024.05.16 02:00 pelicano234 [MAIN] Jurassic Park T Rex Rampage 75936 - 80 spots at $5/ea

CashApp and Gpay payments should have NO COMMENTS. IF YOU HAVE QUESTIONS PLEASE PM ME. Comments will result in a permanent ban
Item Name Set Number: Jurassic Park T Rex Rampage 75936
Lego Price/Justification: $334 BL
Shipping: $66 (24x20x7, 15 lbs, UPS Ground, 53186 to 99026)
Raffle Total/Spots: $400 = 80 @ $5
Call spots: Yes
Spot limit per person: No
Duration of spot limit: N/A
Location(Country): USA
Will ship international: No sorry. Hawaii/ Alaska/ APO pay cost above listed quote.
Timestamp pics: https://imgur.com/a/9MKNFsA
Description: Dino or chicken? Box squished a bit
Payment required w/in 10 minutes of raffle filling. 5 min for drama
CashApp and Gpay payments should have NO COMMENTS. IF YOU HAVE QUESTIONS PLEASE PM ME. Comments will result in a permanent ban
PM me for Gpay NO PAYPAL

PayPal Info: Pm for Gpay. No pp
Cash App Info: https://cash.app

Tip BlobAndHisBoy
Number of vacant slots: 45
Number of unpaid users: 8
Number of unpaid slots: 28
This slot list is created and updated by The EDC Raffle Tool by BlobAndHisBoy.
1
2
3 therontheteej
4 Fluffy-Run-6861 PAID
5 Fluffy-Run-6861 PAID
6 legoislifey
7 Fluffy-Run-6861 PAID
8
9
10 azcpl121314
11 gfy4dsny
12
13
14
15 Bringatowel1
16 Bringatowel1
17 gfy4dsny
18
19 gfy4dsny
20
21 azcpl121314
22
23 legoislifey
24 azcpl121314
25
26 Bringatowel1
27
28
29 Bringatowel1
30
31 gfy4dsny
32
33 gfy4dsny
34
35 Ok_Experience9994 PAID
36 damnstraight
37 legoislifey
38
39
40 gfy4dsny
41 LegoRaffleWinner89 PAID
42 Fluffy-Run-6861 PAID
43 DrSeussFreak
44 Bringatowel1
45
46
47
48
49
50 robob280
51
52
53
54
55
56
57
58
59
60
61
62 gfy4dsny
63 azcpl121314
64
65 damnstraight
66 damnstraight
67 Ok_Experience9994 PAID
68
69 legoislifey
70
71
72
73
74
75 gfy4dsny
76
77 damnstraight
78
79
80

submitted by pelicano234 to lego_raffles [link] [comments]


2024.05.15 19:25 Polochyzz Databricks asset bundle & private nexus

Hey
We are using DAB to deploy our piplines & python wheel. It works fine.
Recently, our security team asked us to push our wheel in Nexus during CI process. Not a big deal tbh ...
Instead of pushing .wheel into lake storage, we push it on nexus.
BUT ... we have no idea how to setup DAB to use .wheel from private nexus.
Network seems to be "ok", we can install our wheel using pip with correct --index-url & --trusted-host parameters within a notebook, but what about DAB?
pip command : "pip install --index-url XXX --trusted-host YY
I didn't find any documentation about defining theses parameters in bundle.yaml.
TASK DEFINITION :
Basically, I want to install .wheel in my Workflow cluster direclty from my private nexus using DAB.
Thank you !
submitted by Polochyzz to databricks [link] [comments]


2024.05.15 19:15 Hallyu_Otaku I would appreciate an honest opinion about my K-pop blog concerning Google's policies

Hello,
I've been keeping a blog about the new releases of K-pop girl groups and female solo artists for the past 5 years.
The reason for this post is that after 5 years and almost 600 unique posts, not a single page of this blog appears in Google search. For 5 years nobody has visited my blog. This has been verified by Google Search Console. All my pages are "not indexed" and I quote: "Pages that aren't indexed can't be served on Google".
I keep asking Google again and again to index my blog pages, and nothing. I believe there must be some technical reason, but every single person who replied to me has said that my blog is just too bad to be on Google, and that it is because of the content and not a technical issue.
So, I would like to ask people who like K-pop and not SEO and Google experts to tell me honestly what is so bad with my blog that it doesn't deserve to be even in the 100000th page of a Google search enquiry.
Please tell me honestly, because I have reached my limit. At this point I've realized that for 5 years I was keeping a diary that only I could see, not a blog.
Today's answer from Google was the following: "After checking the given URL, I have found that you might need to put more effort into creating some more great content that people would want to read. It should be your original content and is something Google would want to show to its users. If Google's algorithms have determined that the site lacks value over those other sites already in the index it isn't going to index more. There may simply be too much volume on this subject already."
I obviously write my own content and my reviews are my own. Some are positive and some are slightly negative, but they are all honest reviews by a person who likes K-pop and has been a fan for over 15 years.
So why is my blog worthless? Why none of my 600 pages deserve to be on Google search? I really don't get it and I honestly want your opinion.
Today's blog post is this: https://one-in-a-million-kpop.blogspot.com/2024/05/itzy-release-japanese-single-algorhythm.html
At this point Google is telling me to stop since they are never going to index my blog.
What is your opinion?
Thank you.
submitted by Hallyu_Otaku to kpophelp [link] [comments]


2024.05.15 19:04 Emcf [Easy Guide] Building Multimodal Apps with GPT-4o

Ok so OpenAI just announced GPT-4o, a new model that can reason across audio, vision, and text in real time (unheard of for a model of this intelligence for those unfamiliar). According to OpenAI, GPT-4o "accepts as input any combination of text, audio, and image and generates any combination of text, audio, and image outputs". (See OpenAI's demo on YouTube)
Recently, I released an open source library so you can extract data in multiple modalities to feed your AI-based Python projects. In this post, I'll show you how to use it alongside GPT-4o with the OpenAI API to build multimodal apps with it. I've got nothing better to do right now, so I'll walk through the steps of extracting all multimodal content from different sources, preparing the input for GPT-4o, sending it to the model for processing, and getting our results back.
Before getting into the code, let's just stop and ask ourselves why we'd use GPT-4o over previous models like GPT-4-turbo:
Multi-modal Input and Output: GPT-4o can handle text, audio, and image inputs and generate outputs in any of these formats.
Real-time Processing: The model can respond to audio inputs in as little as 232 milliseconds, making it suitable for real-time applications.
Improved Performance: GPT-4o matches GPT-4 Turbo performance on text in English and code, with significant improvements in non-English languages, vision, and audio understanding.
Cost and Speed: GPT-4o is 50% cheaper and 2x faster than GPT-4 Turbo, with 5x higher rate limits.
Ok, let's get to the code lol:

Step 1: Extract!

This can be done using The Pipe API which can handle various file types and URLs, extracting text and images in a format that GPT-4o can understand.
For example, if we were analyzing a talk based on a scientific paper, we could combine the two sources to provide a comprehensive input to GPT-4o:
```
from thepipe_api import thepipe

# Extract multimodal content from a PDF
pdf = thepipe.extract("path/to/paper.pdf")

# Extract multimodal content from a YouTube video
vid = thepipe.extract("https://youtu.be/dQw4w9WgXcQ")
```

Step 2: Prepare the Input for GPT-4o

Here's an example of how to prepare the input prompt by simply combining the extracted content with a question from the user:
```
# Add a user query
query = [{
    "role": "user",
    "content": "Which figures from the paper would help answer the question at the end of the talk video?"
}]

# Combine the content to create the input prompt for GPT-4o
messages = pdf + vid + query
```

Step 3: Send the Input to GPT-4o

With the input prepared, you can now send it to GPT-4o using the OpenAI API. Make sure you have your OPENAI_API_KEY set in your environment variables.
```
from openai import OpenAI

# Initialize the OpenAI client
openai_client = OpenAI()

# Send the input to GPT-4o
response = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
)

# Print the response
print(response.choices[0].message.content)
```

All done!

PS:
If you have literally no idea what I'm talking about, check out the OpenAI GPT-4o announcement!
If you're a developer, feel free to access or contribute to The Pipe on GitHub! It is important to note that OpenAI's GPT-4o model is only accepting textual and visual modalities at release; however, we will be carefully monitoring the new modalities released for GPT-4o in the coming weeks and updating the library accordingly.
submitted by Emcf to ChatGPTPromptGenius [link] [comments]


2024.05.15 18:50 Spooker0 Grass Eaters 52 Just Passing Through

Previous
First Series Index Galactic Map State of War Map RoyalRoad Patreon Discord

MNS Oengro

“How’s the fuel status of the Oengro?” Grionc asked.
Vastae, eyes glued to his console, replied without hesitation. “We have just enough blink fuel for one jump, but we aren’t going anywhere once we get to the other side without a refueling ship.”
“One blink is all we need. And if the Oengro is good to go, the other, smaller ships should be fine too then,” Grionc responded, bringing up the system map on screen with her paws. “Four minutes to blink limit. Have the ship’s crew secure themselves for the blink and get ready for shift change to execute post-blink procedures when we arrive.”
“Yes, High Fleet Commander,” Vastae acknowledged with a brisk nod.
Suddenly, three quarters of the sensor readings on her sensor board disappeared, and the fidelity on the remaining took a nose-dive in accuracy. A low murmur ran through the sensor stations, which she waved away with a paw. “No need to panic. It looks like our friends jumped before we did, as arranged. Our sensors are on their own for now.”
Vastae swallowed hard. “Are you certain about this plan, High Fleet Commander?” Vastae asked nervously. “Not that I don’t trust what Sphinx— Speinfoent cooked up, but this is a last-minute plan modification we haven’t rehearsed. And with our fuel situation, we only get one chance here.”
Grionc put a calm smile on her face. “Remember that exercise we did with the Grass Eaters a while back?”
“Which one?”

4 months ago

“Since it’s New Years, it’s time to have some fun,” Mark announced with a grin to Grionc and the rest of the curious bridge crew. “I’m going to show you guys a fun teambuilding exercise we did on Terra.”
“Teambuilding exercise?” Grionc asked suspiciously.
Mark didn’t let her skepticism color his enthusiasm. “Well, I’m not sure how much teambuilding it does, but it is fun. And I have never seen aliens do it. In fact, this might be the first time this has ever been done outside of Sol!”
“Fine, fine. What are we doing?” she relented.
“This exercise is what we call the trust fall.”
“The trust fall?” Grionc repeated. “It’s about building trust? Like trust in your crew?”
Mark nodded vigorously. “It’s supposed to. I’m not sure if it truly works, but it truly is fun. You and I can demonstrate for the crew.”
Grionc sighed. “Sure. What do I do?”
“Come stand over here,” Mark pointed to a spot on the floor, and then stood in front of her with his back to her. “What I’m going to do is I’m going to cross my arms… like this… and on the count of three, I’m going to fall backwards, and you have to catch me when I do.”
“Huh. That seems dangerous. What happens to you if I don’t catch you?” Grionc asked, mild concern creeping into her voice.
“Traumatic brain injury, probably. Something similar for your species too, I assume,” Mark shrugged nonchalantly. “But don’t worry about that. We have good medical facilities on the Nile, and you will catch me. That is the point of the exercise. Alright, you ready?”
Sensing his insistence, Grionc sighed and held her paws out, bracing herself. “Ready.”
“One, two, three…” Mark did as he described, crossing his arms, and falling backwards into Grionc’s outstretched arms. She grunted with slight effort as she intercepted his fall and then gently lowered him onto the ground, “Oomph. Huh. You Terrans are lighter than you look.”
“Yeah, my bones are nano-grafted,” Mark grinned, bounced up to full height, and circled around her back. “Okay, now it’s your turn.”
Grionc crossed her arms and held her breath for a moment. “One, two…”
She didn’t move. A few seconds later, she let go of her held breath. “I can’t.”
“What? Why not?”
Grionc muttered excuses. “No, it’s just— my tail— our balance mechanisms are different, I can’t just fall backwards on purpose—”
Mark insisted. “It’s not that difficult. Just let go. Don’t worry. I’m right here. I promise I’ll catch you.”
She held her breath once again, psyching herself up for a few more moments.
“One, two… doh, I can’t.”
Mark lightly patted her on the shoulder. “That’s okay… don’t worry… Hey, Speinfoent, come over here and give her a light shove. Alright, on the count of three. One, two—”
“Oh, no. Don’t you dare! No! Don’t touch— Yowwwwwww!”
Grionc continued, “And now… we fall. And we trust that our new friends will be there to catch us.”

ZNS 2228

“They’ve blinked,” the computer officer reported.
“Did we catch their blink vector?” Skvanu asked urgently.
“Calculating… got it! We triangulated their blink vector and probable destination! Entering it into our fleet navigation computers,” she responded, paws flying over the controls.
“How long before we can execute the blink?” Skvanu pressed.
“Two minutes before we hit the limit ourselves,” she replied, not looking up.
“Good, get the crews ready and start the countdown. I want to blink the millisecond we are clear of the system limit. And get all systems ready for what’s on the other side. They almost definitely have an ambush waiting for us. I’m guessing that’s where the remaining nine or so squadrons of Sixth Fleet are waiting for us,” Skvanu said confidently. “Twelve Lesser Predator squadrons to twenty-six of ours. Doesn’t matter how many upgrades they have, we will defeat them, especially since the first three will be within railgun range. Get those gunnery crews and point defense computers ready.”
“Blinking in seventy seconds,” she announced. “Sixty-five seconds—” Suddenly, she stood up, “Eight Whiskers, our FTL communications are open again! Both Datsot and Gruccud have just responded to our last message!”
Skvanu spun around to face her. “That makes sense. Whatever device they used to stop our communications must have been on one of the ships that just blinked out. Is there any priority intelligence from either?”
“Yes! Datsot has an emergency transmission for us. It’s from Ten Whiskers Ditvish!”
“What is it?” Skvanu asked, his voice serious.
She began to read. “Lesser Predators have entered Datsot system in force. Nine squadrons spotted so far. They may attempt to engage our garrison force there… His guidance is that we return immediately to trap these aggressor ships, but leaves the decision up to you…”
Skvanu absorbed the information with shock. If those ships are really in Datsot, they must not be on the other side of wherever the Oengro is blinking. And with that context, this now smelled exactly like a planned trap.
He thought out loud. “This must be what the Lesser Predators planned from the start. If we chase, we have no idea what they have on the other side. There may be refueling ships. They may have already gotten away. By the Prophecy, they may even be sacrificing three squadrons to get us to blink through a singularity or anomaly. But wait… If we return to Datsot immediately, we might catch those squadrons split from the rest of their ships and cripple their fleet!”
Having made up his mind, he shouted urgently at the navigation station, “Navigation, hold the blink!”
“Halting the blink procedures.”
“A handful of ships have already completed the blink!” the computer officer reported, almost in a panic.
“Cease blink procedures! Fleet-wide, cease the blink!”
The order went out immediately, and it was a testament to the discipline of the Znosian Navy that most squadrons managed to stop the countdown just seconds before it went through.
“How many ships went through?” Skvanu asked urgently.
“We managed to stop most of our ships, Eight Whiskers. Only five combat ships from Squadron 6 went through.”
He sighed in relief. “Only the Prophecy can help them now… Turn us around. Let’s get back to Datsot.”

TRNS Nile

“I think we are in sufficiently deep space,” Captain Gregor Guerrero said to his crew. “Drop us out.”
“Yes, captain. Emergency drop-out in five… four… three… two… one… now.”
The ship shuddered and creaked as the emergency-stop was activated. The blink engine wound down, forcing the ship back into normal space.
Gregor turned to his navigation officer. “How far from Plaunsollib did we travel, in regular space?”
“Two months on their Alcubierre drives if they combat burn with all their fuel. Four if they plan on stopping,” she replied immediately. “They’d be going too fast to aerobrake anyway.”
“Good,” Guerrero said, gluing his eyes to his sensor board. Ships in FTL are difficult to detect, even on gravidar, but the state-of-the-art technology on the Nile gave them a few seconds of warning.
A few seconds later, the sensor officer’s voice cut through the tense silence. “I’ve spotted the Puppers in blink! All of them, tight formation. They’ll pass us in about fifteen seconds.”
Guerrero nodded his pleasure. “Good, let them pass. Tell me when they’re out of range.”
The seconds ticked by. “Ten… five… they’ve passed our position… and now they’re out of range.”
“Now, switch on the blink disruption field,” he ordered.
The hum of the ship’s ambient noise went up an octave, signaling maximum power drain as the ship’s thirstiest system kicked in.
Gregor looked at his information panel. “Full emissions control. EMCOM Alpha. Deploy the FTL jammer drone and then shut off our engines. If things go well, we’re about to be joined by half the fucking Bunny Navy in a minute.”
“Aye, Captain. EMCOM Alpha.” The rest of the crew nodded, working their controls with practiced competence.
“Jammer drone out. You think they’ve got wild weasels, captain?”
“Unlikely, but we take no chances. If they don’t…” He shrugged. “… we’ll just get our drone back later.”
A tense minute passed, then the sensor officer reported, “Captain, Znosian ships spotted on gravidar! Two… three… five in total… They’ve just been forced out of blink.”
“Five squadrons?”
“No, Captain, five ships.”
Gregor furrowed his brow, surprised, and took another glance at his console. “Only five ships?”
“Yes, sir.”
“Alright, keep the disruption field up, and analyze the drive signatures on them. Maybe one of them is this Skvanu guy we’re supposed to hit,” he speculated hopefully.
After half an hour, Guerrero finally called it quits. “No more guests are showing up. Looks like they must have wised up at the last moment.”
“Aye, sir,” the executive officer said, shaking her head in disappointment as well. “It was a good plan. Could have stranded their whole fleet out here.”
“Well, bad luck— these things happen in war, Lieutenant. Don’t worry. We’ll get them next time. How are the guests we did get doing?”
“Out of blink fuel, as expected. They’ve been dumping cargo in an organized fashion. I think they’re planning to see if they can reach Plaunsollib with their subspace drives in a reasonable amount of time and call triple A.” Then, she asked, “Where do you think the rest run off to?”
“Probably Datsot,” Guerrero guessed. “Phone Sphinx and tell him he’s probably got the whole shit storm heading his way, ETA about a couple days. Get the estimates to him.”
“Yes, sir.”
“Now, we just need to silence the witnesses so we can use this trick again. Bridge to CIC: let’s keep it simple. One Kestrel for each of the targets. We’ll swiss-cheese them with railguns after. Just in case.”
“Aye, Captain. We’re not dropping off those TRO drones here, are we?”
“Nah. Too much work. No one is finding these guys ever again anyway.”

MNS Trassau

“I just got off a call with the Nile,” Loenda announced. “Looks like the Grass Eaters have discovered our ruse in the other system. The main enemy fleet is heading our way right this second.”
Speinfoent sighed, and suggested, “If we burn closer for just half a day more—”
“No more,” Loenda declared. “We are already risking nine squadrons coming this far into the Datsot system limit.”
“Alright,” Speinfoent agreed reluctantly. “We can still give them a present they won’t forget any time soon.”
“That, we will. That we will.” Loenda turned to her console. “All ships in Battlegroup 2, dump your payloads as quietly as you can. Then wait half an hour to change your vector and make your way to the system blink limit.”
“Yes, Battlegroup Commander.”

ZNS 1841

“Ten Whiskers, the Lesser Predators are turning around,” the computer officer declared, doing her best to hide her relief.
“What? Where are they heading now?” Ditvish asked, confounded.
“Towards the shortest path to the system blink limit, I think.”
“That’s it? They’re just leaving now?”
“Combat computer speculates that they might have seen that Eight Whiskers Skvanu is heading back to Datsot, so they are breaking off the attack,” the officer offered.
“That’s… not very Lesser Predator of them, but very logical,” he admitted. “They must have realized their plan failed and are now cutting their losses.”
He didn’t mention that his fleet was the one that came out behind, losing yet another precious supply convoy and then sending the whole combat fleet on a wild predator chase for nothing. That State Security goon might start to become a problem if he didn’t spin this well in his after-action report.
A few hours later, a foreboding feeling coloring his mood, he ordered, “Sensors, boost our radars towards where they changed vectors. I want to check to see if they dropped any drones or traps.”
“Yes, Ten Whiskers.”
The 1841 boosted its radar in that direction, blaring out signals at maximum strength and—
“Incoming… missiles? Ten Whiskers, many missiles! Dozens! Over a hundred! They’re well within our minimum abort range!”
“By the Prophecy!” Ditvish exclaimed. “All ships, execute combat burn away from them! Countermeasures and fire counter-missiles, at the ready! Track those missiles!”
Fortunately, the garrison fleet was still in high readiness from before. Their engines were ready to light up to full acceleration immediately.
Unfortunately, the missiles were already close. In desperation, his ships began dumping their entire loads of radar chaff and flares into space behind them as they maneuvered away from the threat. Counter-missiles sped out of their tubes towards their rear, relying on their motherships’ sensors and radars to find the tiny alien missiles for them to engage.
Quietly gliding through space towards the enemy on inertia inherited from their motherships was the sizable swarm of Terran-made missiles. Obsolete for military purposes in Sol but still produced for the civilian and gray market, they were an easy addition to the TRO’s shopping list. Vast quantities of them had found their way into various shell corporations and dead drops all over Sol, then onto hastily constructed exterior pylons on Sixth Fleet ships.
While they were indeed several times outside of the maximum effective range of the Znosian ships at launch, missiles technically did have unlimited ballistic ranges in space — if their enemies were not moving and they did not need to constantly fire their thrusters to adjust course. Relying on a short first burn and then inertia, they flew most of the way towards the stationary enemy fleet completely undetected. By the time they were spotted, it was too late; the Znosians were well within their effective ranges.
Their intelligence chips might not have been super-Terran state-of-the-art computers, but the Pigeons had no problem realizing that they had been discovered. They had been tracking the enemy targets using passive infrared sensors that did not alert enemy threat sensors to their presence. But the second that the targets started dropping flares to blind them, they activated their primitive late twenty-first century radars and homed in on the priority targets they’d been given. Their main thrusters began their burns, adjusting their vectors to intercept the now-finally-moving enemy ships.
Then, they saw the incoming counter-missiles — fired by the enemies sporadically, obviously in panic.
The makers of the Pigeons might not have bothered to include next-generation electronic dazzlers on them, but penetration aids on missiles had been standard in Terran warfare for a century. They littered the space they were in with chaff and their own bright flares, coordinating with the other missiles in the area via short-range laser communication to ensure that none in the swarm would confuse or disrupt each other.
The Znosian counter-missiles were certainly confused and disrupted, though. Many veered off into phantom signals. Some lucky ones did manage to find their targets. Whenever a few of their comrades dropped off the impromptu mesh network, the Pigeons corresponded over their laser links to re-prioritize their targeting.
At the top of the list was the fattest, easiest target of them all: the enemy flagship 1841.
Seconds before impact, the missiles finalized their targets, and they spent every drop and fume of their remaining fuel on terminal maneuvers.
The Znosians’ close-in weapon systems had milliseconds to engage the incoming threats. They performed admirably… for trying to deal with this unknown alien threat for the first time. A couple dozen more missiles were plucked out of space, but it was not enough.
Not nearly.
The rest slipped through the net.
Miraculously, the 1841 managed to survive initially. Despite it being the primary focus of the Pigeon mob, the other ships did their best to shield its most vital components in its rear with their own point defense. And the Pigeons — like most missiles of their era — were loaded with just enough firepower to destroy much smaller Terran ships. The larger hulls of the Znosian ships gave their obsolete mid-century intelligence chips a slightly more interesting exercise in module identification and targeting.
The massive Thorn-class battleship took fourteen hits to varying systems that the missiles visually identified as “that looks pretty important” on their final approach: its primary missile and gun tubes were trashed, venting atmosphere to space in those compartments. A proximity hit near the stern took out four of its eight massive main thrusters and several system modules at the rear of the ship. And perhaps worst of all, one Pigeon managed to zero in on its vulnerable front bridge, the explosion emptying its contents and occupants into vacuum.
Luckily for Ten Whiskers Ditvish, none of them hit the armored flag bridge where he was in the belly of the ship, vindicating the Znosian Navy’s practice of separating the two for redundancy.
Nonetheless, Ditvish fell to the ground as the simultaneous impacts temporarily overloaded the inertial compensators and shook the ship to its core. Sparks flew around him, and he smelled a pungent stink as the automated fire suppression systems kicked in to save as much as they possibly could.
He slowly climbed to his feet and looked at the scene around him. A sensor officer was spraying foam at a small fire with a handheld device, successfully extinguishing it in seconds. Several others of his crew were recovering and returning to their stations with remarkable calm. After all, they were elite, well-trained spacers and officers of the Znosian Navy.
Ditvish did the same, propping himself back into his command chair with slight effort. He operated his console in a concussed daze. One glance at the status board told him that the 1841 was a write-off. It wasn’t going to be combat effective ever again. At least its life pod systems were working, and he watched in relief as dozens then hundreds of crew members in the damaged sections of the ship climbed into theirs and ejected into the relative safety of vacuum.
He checked up on the other ships: several others were hit. Six had outright detonated: no survivors nor signals came from them. Two were irreparably damaged, their remaining crews also abandoning their ships in an orderly fashion. And another six had visible fires or scorch marks on their damaged hulls, but those crews were still valiantly fighting to keep their ships alive.
Ditvish noticed that the missiles didn’t go for all his ships, just the ones on the outer edge of his sensor board— wait, the missiles—
To his horror, several dozen more missiles they’d detected were still active, and they were going for—
He looked at his computer officer’s station and yelled, “We have to warn them!”
She yelled something back at him, but he realized that he couldn’t hear her. Hitting the floor must have injured his hearing organs. He yelled again, hoping that she could still hear. “Warn the orbital support fleet! The logistics and fire support ships! Evasive maneuvers and take cover in the atmosphere!”
Her lips moved again. He got out of his chair and stumbled over to her in a daze, trying to hear what she was saying.
She was saying something.
It must be important.
“… not reach them. Our communication array… destroyed! Ten Whiskers, we need to get… We don’t have much time!”
Ditvish finally understood her from reading her lips. He didn’t respond. Just numbly watched the planetary battlemap of Datsot on the main screen.
It didn’t take long. They were completely defenseless.
The remaining missiles plucked every last orbital fire support and logistics transport ship out of the skies of Datsot. Most detonated; a few left behind trails of black smoke as they sank uncontrollably towards the planet’s surface.
Then, Ditvish’s hind legs gave out and he crumpled onto the bridge floor.
He was dimly aware of one of his subordinates dragging him towards the bridge escape pod as he blacked out.

MNS Trassau

“Don’t worry, Speinfoent,” Loenda said, putting her paws around the junior commander looking glumly at the image of Datsot retreating from their view as the rest of the bridge cheered the better-than-anticipated success of the raid. “We’ll come back, and next time, we’re coming back for everything.”
“That we will, Loenda. That we will.”

Meta

There is no research that shows the effectiveness of trust falls for building trust in a team and plenty of research showing that falling backwards from a full standing position without adequate bracing or padding can lead to serious brain, spinal, and back injuries.
Coercion or retaliation against Malgeir employees who refuse to participate in trust fall exercises may be considered investigable or actionable violations of workplace safety regulations by the Republic Office of Occupational Safety or anti-discrimination regulations by the Office of Equal Opportunity.
Whistleblowers are entitled to up to 25% of monetary penalties recovered. If you see something, say something.
Previous
Chapter 53: Apostasy
submitted by Spooker0 to HFY [link] [comments]


2024.05.15 18:43 cdeck272727 How to delete 28k or so pages crawled but not indexed.

I have 28k or so pages crawled but not indexed.
I have since updated my site and cleaned up a lot of the on-page content and URLs. I have about 11k pages crawled and indexed. But the old URLs have about 28k pages still showing as crawled, not indexed.
I now want to delete or block Google from those old URLs since they don't exist anymore.
Questions:
Can I just delete them from my sitemap?
Do I need to manually block them with robots.txt?
submitted by cdeck272727 to SEO [link] [comments]


2024.05.15 17:27 daisho87 [NM] James Bond Aston Martin DB5 (10262) - 110spots @ $3ea

DO NOT ADD A COMMENT TO YOUR PAYMENT; IF YOU HAVE ANY QUESTIONS, PLEASE MESSAGE ME BEFORE PAYING;
Item Name Set Number: James Bond Aston Martin DB5 (10262)
Lego Price: $187.34
Shipping: 6lbs, 20 x 13 x 6, 99016 to 33136 UPS Ground = $32.86
Raffle Total/Spots: $220 / 110 spots @ $2 each
Price justification: BL 6mo INT
Call spots: Yes
Spot limit per person: No
Duration of spot limit: N/A
Location(Country): USA
Will ship international: US / Canada only - Canada / HI / AK / APO pay difference
Timestamp pics: https://imgur.com/a/YloPKAR
Description: Better not catch you in my whip, or I'm launching your terrorist ass out my ejector seat! Hope you don't mind minor box damage on a great set with fun play features. Box has some crunching - see pics.
Payment required w/in 15 minutes of raffle filling. 10 minutes for drama.
DO NOT ADD A COMMENT TO YOUR PAYMENT; IF YOU HAVE ANY QUESTIONS, PLEASE MESSAGE ME BEFORE PAYING;

PayPal Info: https://www.paypal.me
Cash App Info: https://cash.app

Tip BlobAndHisBoy
Number of vacant slots: 14
Number of unpaid users: 8
Number of unpaid slots: 41
This slot list is created and updated by The EDC Raffle Tool by BlobAndHisBoy.
1 Theserialchiller-
2
3 Codename-KND
4 Fluffy-Run-6861 PAID
5 Fluffy-Run-6861 PAID
6 ssj3dvp11
7 wraps_ PAID
8 r1955 PAID
9 porvis
10 porvis
11 ColossusaurusRex8 PAID
12 r1955 PAID
13 robob280
14 wraps_ PAID
15 nmde305
16 ColossusaurusRex8 PAID
17 ColossusaurusRex8 PAID
18 wraps_ PAID
19 Theserialchiller-
20
21 r1955 PAID
22 r1955 PAID
23 ssj3dvp11
24 wraps_ PAID
25 r1955 PAID
26 MudIsland PAID
27 mister_panduhh PAID
28 DTURPLESMITH
29 Theserialchiller-
30 robob280
31 robob280
32
33 DTURPLESMITH
34 ColossusaurusRex8 PAID
35
36 ColossusaurusRex8 PAID
37 DTURPLESMITH
38 r1955 PAID
39 wraps_ PAID
40 Fluffy-Run-6861
41 Fluffy-Run-6861
42 Fluffy-Run-6861
43 DTURPLESMITH
44
45 wraps_ PAID
46 Fluffy-Run-6861 PAID
47 r1955 PAID
48 ColossusaurusRex8 PAID
49 Fluffy-Run-6861
50 r1955 PAID
51
52 Fluffy-Run-6861
53 ColossusaurusRex8 PAID
54 Fluffy-Run-6861
55 DTURPLESMITH
56 mudisland PAID
57 ColossusaurusRex8 PAID
58 wraps_ PAID
59
60 DTURPLESMITH
61 Codename-KND
62
63 erme525 PAID
64 DTURPLESMITH
65 DTURPLESMITH
66 ColossusaurusRex8 PAID
67 Codename-KND
68 r1955 PAID
69 r1955 PAID
70 ColossusaurusRex8 PAID
71 wraps_ PAID
72 r1955 PAID
73 ssj3dvp11
74 MudIsland PAID
75 Fluffy-Run-6861
76 r1955 PAID
77 r1955 PAID
78 ssj3dvp11
79 r1955 PAID
80 ColossusaurusRex8 PAID
81
82 mister_panduhh PAID
83 r1955 PAID
84 ColossusaurusRex8 PAID
85
86 r1955 PAID
87
88 ssj3dvp11
89 Fluffy-Run-6861
90 DTURPLESMITH
91 erme525 PAID
92 wraps_ PAID
93 r1955 PAID
94 porvis
95
96 wraps_ PAID
97 ColossusaurusRex8 PAID
98 DTURPLESMITH
99 ColossusaurusRex8 PAID
100 Codename-KND
101
102
103 DTURPLESMITH
104 Codename-KND
105 ColossusaurusRex8 PAID
106 ColossusaurusRex8 PAID
107 ColossusaurusRex8 PAID
108 porvis
109 wraps_ PAID
110 porvis

submitted by daisho87 to lego_raffles [link] [comments]


2024.05.15 16:17 Jakatagarin Torch refusing to be installed, Automatic1111 not working

I'm going insane. I've been trying to make this work for four hours already, and the more "fixes" from the internet I use, the more errors I get. I tried everything to make my Automatic1111 run; despite it working fine this morning, without making any changes to the program, it suddenly stopped working. I get all sorts of errors, mostly connected with Torch or pip.
C:\AI4\AI6>git pull
fatal: not a git repository (or any of the parent directories): .git
Collecting torch==2.1.2
  Using cached https://download.pytorch.org/whl/cu121/torch-2.1.2%2Bcu121-cp310-cp310-win_amd64.whl (2473.9 MB)
Collecting torchvision==0.16.2
  Using cached https://download.pytorch.org/whl/cu121/torchvision-0.16.2%2Bcu121-cp310-cp310-win_amd64.whl (5.6 MB)
Requirement already satisfied: typing-extensions in c:\ai4\ai6\venv\lib\site-packages (from torch==2.1.2) (4.11.0)
Requirement already satisfied: jinja2 in c:\ai4\ai6\venv\lib\site-packages (from torch==2.1.2) (3.1.4)
Requirement already satisfied: sympy in c:\ai4\ai6\venv\lib\site-packages (from torch==2.1.2) (1.12)
Requirement already satisfied: fsspec in c:\ai4\ai6\venv\lib\site-packages (from torch==2.1.2) (2024.3.1)
Requirement already satisfied: filelock in c:\ai4\ai6\venv\lib\site-packages (from torch==2.1.2) (3.14.0)
Requirement already satisfied: networkx in c:\ai4\ai6\venv\lib\site-packages (from torch==2.1.2) (3.3)
Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in c:\ai4\ai6\venv\lib\site-packages (from torchvision==0.16.2) (10.3.0)
Requirement already satisfied: numpy in c:\ai4\ai6\venv\lib\site-packages (from torchvision==0.16.2) (1.26.4)
Requirement already satisfied: requests in c:\ai4\ai6\venv\lib\site-packages (from torchvision==0.16.2) (2.31.0)
Requirement already satisfied: MarkupSafe>=2.0 in c:\ai4\ai6\venv\lib\site-packages (from jinja2->torch==2.1.2) (2.1.5)
Requirement already satisfied: charset-normalizer<4,>=2 in c:\ai4\ai6\venv\lib\site-packages (from requests->torchvision==0.16.2) (3.3.2)
Requirement already satisfied: idna<4,>=2.5 in c:\ai4\ai6\venv\lib\site-packages (from requests->torchvision==0.16.2) (3.7)
Requirement already satisfied: urllib3<3,>=1.21.1 in c:\ai4\ai6\venv\lib\site-packages (from requests->torchvision==0.16.2) (2.2.1)
Requirement already satisfied: certifi>=2017.4.17 in c:\ai4\ai6\venv\lib\site-packages (from requests->torchvision==0.16.2) (2024.2.2)
Requirement already satisfied: mpmath>=0.19 in c:\ai4\ai6\venv\lib\site-packages (from sympy->torch==2.1.2) (1.3.0)
Installing collected packages: torch, torchvision
Traceback (most recent call last):
  File "C:\AI4\AI6\launch.py", line 48, in <module>
    main()
  File "C:\AI4\AI6\launch.py", line 39, in main
    prepare_environment()
  File "C:\AI4\AI6\modules\launch_utils.py", line 380, in prepare_environment
    run(f'"{python}" -m {torch_command}', "Installing torch and torchvision", "Couldn't install torch", live=True)
  File "C:\AI4\AI6\modules\launch_utils.py", line 115, in run
    raise RuntimeError("\n".join(error_bits))
RuntimeError: Couldn't install torch.
Command: "C:\AI4\AI6\venv\Scripts\python.exe" -m pip install torch==2.1.2 torchvision==0.16.2 --extra-index-url https://download.pytorch.org/whl/cu121
Error code: 3221225477
I've tried reinstalling everything, both Git and Python; I've manually copied Torch files into \venv\Scripts, tried to install several new Automatic1111 copies in different locations or disks, downloaded new pip versions, and changed the startup COMMANDLINE_ARGS to things people recommended. Nothing works.
What's even more frustrating is that on webui-user.bat startup, you do not even get errors in a consistent manner - sometimes, without any changes, it displays one thing, sometimes something totally different. After finishing this post, I clicked it once more and got a different message:
C:\AI4\AI6>git pull
fatal: not a git repository (or any of the parent directories): .git
venv "C:\AI4\AI6\venv\Scripts\Python.exe"
fatal: not a git repository (or any of the parent directories): .git
fatal: not a git repository (or any of the parent directories): .git
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.9.3
Commit hash:
Installing torch and torchvision
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu121
Requirement already satisfied: torch==2.1.2 in c:\ai4\ai6\venv\lib\site-packages (2.1.2+cu121)
Collecting torchvision==0.16.2
  Using cached https://download.pytorch.org/whl/cu121/torchvision-0.16.2%2Bcu121-cp310-cp310-win_amd64.whl (5.6 MB)
Requirement already satisfied: filelock in c:\ai4\ai6\venv\lib\site-packages (from torch==2.1.2) (3.14.0)
Requirement already satisfied: networkx in c:\ai4\ai6\venv\lib\site-packages (from torch==2.1.2) (3.3)
Requirement already satisfied: jinja2 in c:\ai4\ai6\venv\lib\site-packages (from torch==2.1.2) (3.1.4)
Requirement already satisfied: sympy in c:\ai4\ai6\venv\lib\site-packages (from torch==2.1.2) (1.12)
Requirement already satisfied: typing-extensions in c:\ai4\ai6\venv\lib\site-packages (from torch==2.1.2) (4.11.0)
Requirement already satisfied: fsspec in c:\ai4\ai6\venv\lib\site-packages (from torch==2.1.2) (2024.3.1)
Requirement already satisfied: requests in c:\ai4\ai6\venv\lib\site-packages (from torchvision==0.16.2) (2.31.0)
Requirement already satisfied: numpy in c:\ai4\ai6\venv\lib\site-packages (from torchvision==0.16.2) (1.26.4)
Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in c:\ai4\ai6\venv\lib\site-packages (from torchvision==0.16.2) (10.3.0)
Requirement already satisfied: MarkupSafe>=2.0 in c:\ai4\ai6\venv\lib\site-packages (from jinja2->torch==2.1.2) (2.1.5)
Requirement already satisfied: certifi>=2017.4.17 in c:\ai4\ai6\venv\lib\site-packages (from requests->torchvision==0.16.2) (2024.2.2)
Requirement already satisfied: urllib3<3,>=1.21.1 in c:\ai4\ai6\venv\lib\site-packages (from requests->torchvision==0.16.2) (2.2.1)
Requirement already satisfied: idna<4,>=2.5 in c:\ai4\ai6\venv\lib\site-packages (from requests->torchvision==0.16.2) (3.7)
Requirement already satisfied: charset-normalizer<4,>=2 in c:\ai4\ai6\venv\lib\site-packages (from requests->torchvision==0.16.2) (3.3.2)
Requirement already satisfied: mpmath>=0.19 in c:\ai4\ai6\venv\lib\site-packages (from sympy->torch==2.1.2) (1.3.0)
Installing collected packages: torchvision
Successfully installed torchvision-0.16.2+cu121
[notice] A new release of pip available: 22.2.1 -> 24.0
[notice] To update, run: C:\AI4\AI6\venv\Scripts\python.exe -m pip install --upgrade pip
Traceback (most recent call last):
  File "C:\AI4\AI6\launch.py", line 48, in <module>
    main()
  File "C:\AI4\AI6\launch.py", line 39, in main
    prepare_environment()
  File "C:\AI4\AI6\modules\launch_utils.py", line 386, in prepare_environment
    raise RuntimeError(
RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
When updating pip, there is a message that version 24 is already installed ("Requirement already satisfied"). After adding the --skip-torch-cuda-test flag, I get this:
C:\AI4\AI6>git pull
fatal: not a git repository (or any of the parent directories): .git
venv "C:\AI4\AI6\venv\Scripts\Python.exe"
fatal: not a git repository (or any of the parent directories): .git
fatal: not a git repository (or any of the parent directories): .git
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.9.3
Commit hash:
Installing clip
Traceback (most recent call last):
  File "C:\AI4\AI6\launch.py", line 48, in <module>
    main()
  File "C:\AI4\AI6\launch.py", line 39, in main
    prepare_environment()
  File "C:\AI4\AI6\modules\launch_utils.py", line 393, in prepare_environment
    run_pip(f"install {clip_package}", "clip")
  File "C:\AI4\AI6\modules\launch_utils.py", line 143, in run_pip
    return run(f'"{python}" -m pip {command} --prefer-binary{index_url_line}', desc=f"Installing {desc}", errdesc=f"Couldn't install {desc}", live=live)
  File "C:\AI4\AI6\modules\launch_utils.py", line 115, in run
    raise RuntimeError("\n".join(error_bits))
RuntimeError: Couldn't install clip.
Command: "C:\AI4\AI6\venv\Scripts\python.exe" -m pip install https://github.com/openai/CLIP/archive/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1.zip --prefer-binary
Error code: 3221225477
And again, another click without making any changes to anything gives yet another message. Please help:
C:\AI4\AI6>git pull
fatal: not a git repository (or any of the parent directories): .git
venv "C:\AI4\AI6\venv\Scripts\Python.exe"
fatal: not a git repository (or any of the parent directories): .git
fatal: not a git repository (or any of the parent directories): .git
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.9.3
Commit hash:
Installing clip
Traceback (most recent call last):
  File "C:\AI4\AI6\launch.py", line 48, in <module>
    main()
  File "C:\AI4\AI6\launch.py", line 39, in main
    prepare_environment()
  File "C:\AI4\AI6\modules\launch_utils.py", line 393, in prepare_environment
    run_pip(f"install {clip_package}", "clip")
  File "C:\AI4\AI6\modules\launch_utils.py", line 143, in run_pip
    return run(f'"{python}" -m pip {command} --prefer-binary{index_url_line}', desc=f"Installing {desc}", errdesc=f"Couldn't install {desc}", live=live)
  File "C:\AI4\AI6\modules\launch_utils.py", line 115, in run
    raise RuntimeError("\n".join(error_bits))
RuntimeError: Couldn't install clip.
Command: "C:\AI4\AI6\venv\Scripts\python.exe" -m pip install https://github.com/openai/CLIP/archive/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1.zip --prefer-binary
Error code: 1
stdout: Collecting https://github.com/openai/CLIP/archive/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1.zip
  Downloading https://github.com/openai/CLIP/archive/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1.zip (4.3 MB)
     ---------------------------------------- 4.3/4.3 MB 11.0 MB/s eta 0:00:00
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
  Installing backend dependencies: started
  Installing backend dependencies: finished with status 'error'
stderr: error: subprocess-exited-with-error
  pip subprocess to install backend dependencies did not run successfully.
  exit code: 3221225477
  [0 lines of output]
  [end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
pip subprocess to install backend dependencies did not run successfully.
exit code: 3221225477
See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
submitted by Jakatagarin to StableDiffusion [link] [comments]


2024.05.15 16:15 saint_leonard Automate scraping Wikipedia info box specifically and print the data using Python for any wiki page?

Howdy, I am trying to scrape all the links of a large Wikipedia page, the "List of Towns and Gemeinden in Bayern", using Python. The trouble is that I cannot figure out how to export all of the links containing "/wiki/" to my CSV file. I am used to Python a bit, but some things are still kind of foreign to me. Any ideas? Here is what I have so far...
see the main page: https://de.wikipedia.org/wiki/Liste_der_St%C3%A4dte_und_Gemeinden_in_Bayern
and the subpage with the infobox: https://de.wikipedia.org/wiki/Abenberg
and how i gather data:
import pandas

urlpage = 'https://de.wikipedia.org/wiki/Abenberg'
data = pandas.read_html(urlpage)[0]
null = data.isnull()
for x in range(len(data)):
    first = data.iloc[x][0]
    second = data.iloc[x][1] if not null.iloc[x][1] else ""
    print(first, second, "\n")
which runs perfectly; see the output:
Basisdaten Basisdaten
Koordinaten: 49° 15′ N, 10° 58′ OKoordinaten: 49° 15′ N, 10° 58′ O
Bundesland: Bayern
Regierungsbezirk: Mittelfranken
Landkreis: Roth
Höhe: 414 m ü. NHN
Fläche: 48,41 km2
Einwohner: 5607 (31. Dez. 2022)[1]
Bevölkerungsdichte: 116 Einwohner je km2
Postleitzahl: 91183
Vorwahl: 09178
Kfz-Kennzeichen: RH, HIP
Gemeindeschlüssel: 09 5 76 111
LOCODE: ABR
Stadtgliederung: 14 Gemeindeteile
Adresse der Stadtverwaltung: Stillaplatz 1 91183 Abenberg
Website: www.abenberg.de
Erste Bürgermeisterin: Susanne König (parteilos)
Lage der Stadt Abenberg im Landkreis Roth Lage der Stadt Abenberg im Landkreis Roth
now for the page above, which has a lot of links with names of towns and so on:
import requests
from bs4 import BeautifulSoup

def fetch_city_links(list_url):
    response = requests.get(list_url)
    if response.status_code != 200:
        print(f"Failed to retrieve the page: {list_url}")
        return []
    soup = BeautifulSoup(response.content, 'html.parser')
    divs = soup.find_all('div', class_='column-multiple')
    href_list = []
    for div in divs:
        li_items = div.find_all('li')
        for li in li_items:
            a_tags = li.find_all('a', href=True)
            href_list.extend(['https://de.wikipedia.org' + a['href'] for a in a_tags])
    return href_list

def main():
    list_url = 'https://de.wikipedia.org/wiki/Liste_der_St%C3%A4dte_und_Gemeinden_in_Bayern'
    city_links = fetch_city_links(list_url)
    # Print all the city links
    for link in city_links:
        print(link)

if __name__ == "__main__":
    main()
this gives back the following - I have extracted them; here are the URLs:
https://de.wikipedia.org/wiki/Amberg
https://de.wikipedia.org/wiki/Ansbach
https://de.wikipedia.org/wiki/Aschaffenburg
https://de.wikipedia.org/wiki/Augsburg
https://de.wikipedia.org/wiki/Bamberg
https://de.wikipedia.org/wiki/Bayreuth
https://de.wikipedia.org/wiki/Coburg
https://de.wikipedia.org/wiki/Erlangen
https://de.wikipedia.org/wiki/F%C3%BCrth
https://de.wikipedia.org/wiki/Hof_(Saale)
https://de.wikipedia.org/wiki/Ingolstadt
https://de.wikipedia.org/wiki/Kaufbeuren
https://de.wikipedia.org/wiki/Kempten_(Allg%C3%A4u)
https://de.wikipedia.org/wiki/Landshut
https://de.wikipedia.org/wiki/Memmingen
https://de.wikipedia.org/wiki/M%C3%BCnchen
https://de.wikipedia.org/wiki/N%C3%BCrnberg
https://de.wikipedia.org/wiki/Passau
now I want to plug all of these URLs into the code below and fetch the data of all the infoboxes - is this possible:
import pandas

urlpage = 'https://de.wikipedia.org/wiki/Abenberg'
data = pandas.read_html(urlpage)[0]
null = data.isnull()
for x in range(len(data)):
    first = data.iloc[x][0]
    second = data.iloc[x][1] if not null.iloc[x][1] else ""
    print(first, second, "\n")
and subsequently add it all to a DataFrame and store it in a CSV file!?
import requests
from bs4 import BeautifulSoup
import pandas as pd

def fetch_city_links(list_url):
    response = requests.get(list_url)
    if response.status_code != 200:
        print(f"Failed to retrieve the page: {list_url}")
        return []
    soup = BeautifulSoup(response.content, 'html.parser')
    divs = soup.find_all('div', class_='column-multiple')
    href_list = []
    for div in divs:
        li_items = div.find_all('li')
        for li in li_items:
            a_tags = li.find_all('a', href=True)
            href_list.extend(['https://de.wikipedia.org' + a['href'] for a in a_tags])
    return href_list

def scrape_infobox(url):
    response = requests.get(url)
    soup = BeautifulSoup(response.content, 'html.parser')
    infobox = soup.find('table', {'class': 'infobox'})
    if not infobox:
        print(f"No infobox found on this page: {url}")
        return None
    data = {}
    for row in infobox.find_all('tr'):
        header = row.find('th')
        value = row.find('td')
        if header and value:
            data[header.get_text(" ", strip=True)] = value.get_text(" ", strip=True)
    return data

def main():
    list_url = 'https://de.wikipedia.org/wiki/Liste_der_St%C3%A4dte_und_Gemeinden_in_Bayern'
    city_links = fetch_city_links(list_url)
    all_data = []
    for link in city_links:
        print(f"Scraping {link}")
        infobox_data = scrape_infobox(link)
        if infobox_data:
            infobox_data['URL'] = link
            all_data.append(infobox_data)
    df = pd.DataFrame(all_data)
    df.to_csv('wikipedia_infoboxes.csv', index=False)

if __name__ == "__main__":
    main()
but well - it does not give back a result
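A minimal sanity check (a sketch, assuming the scrape_infobox function from the script above is defined in the same file) would be to run the scraper on the one page that is already known to work before looping over every link, so it is clear whether the infobox parsing or the link loop is the part that fails:

# Hypothetical smoke test: check a single known-good page before scraping every link.
data = scrape_infobox('https://de.wikipedia.org/wiki/Abenberg')
print(data)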
submitted by saint_leonard to learnprogramming [link] [comments]


2024.05.15 15:55 Turbulent-Daikon144 Spammy backlinks

We’ve recently run into an issue with our website where we have backlinks pointing to bogus URLs. So a lot of our Search Console results are for things totally unrelated to us - such as Chase Sapphire member agreements, prenuptial agreements, etc. The people who created our website and do the maintenance on it don’t think it’s an issue. For a little while some of these pages were indexed. I’ve asked for them to be removed via Search Console, and am working on my disavow list, but it keeps growing. Getting literally no help or insight from our web people, and it’s becoming quite time consuming. Is it really a non-issue if they come back 404? How can it be stopped? Any other suggestions or insights?
submitted by Turbulent-Daikon144 to SEO [link] [comments]


2024.05.15 15:29 Tycho_Jissard MS-ISAC CYBERSECURITY ADVISORY - Multiple Vulnerabilities in Mozilla Products Could Allow for Arbitrary Code Execution - PATCH: NOW

MS-ISAC CYBERSECURITY ADVISORY
MS-ISAC ADVISORY NUMBER: 2024-056
DATE(S) ISSUED: 05/14/2024
SUBJECT: Multiple Vulnerabilities in Mozilla Products Could Allow for Arbitrary Code Execution
OVERVIEW: Multiple vulnerabilities have been discovered in Mozilla Products, the most severe of which could allow for arbitrary code execution.
Successful exploitation of the most severe of these vulnerabilities could allow for arbitrary code execution in the context of the logged on user. Depending on the privileges associated with the user, an attacker could then install programs; view, change, or delete data; or create new accounts with full user rights. Users whose accounts are configured to have fewer user rights on the system could be less impacted than those who operate with administrative user rights.
THREAT INTELLIGENCE: There are no reports that these vulnerabilities are being exploited in the wild.
SYSTEMS AFFECTED:
RISK: Government:
Businesses:
Home users: Low
TECHNICAL SUMMARY: Multiple vulnerabilities have been discovered in Mozilla Products, the most severe of which could allow for arbitrary code execution. Details of the most critical vulnerabilities are as follows:
Tactic: Initial Access (TA0001):
Technique: Drive-by Compromise (T1189):
Additional lower severity vulnerabilities include:
Successful exploitation of the most severe of these vulnerabilities could allow for arbitrary code execution in the context of the logged on user. Depending on the privileges associated with the user, an attacker could then install programs; view, change, or delete data; or create new accounts with full user rights. Users whose accounts are configured to have fewer user rights on the system could be less impacted than those who operate with administrative user rights.
RECOMMENDATIONS:
We recommend the following actions be taken:
REFERENCES:
Mozilla: https://www.mozilla.org/en-US/security/advisories/ https://www.mozilla.org/en-US/security/advisories/mfsa2024-21/ https://www.mozilla.org/en-US/security/advisories/mfsa2024-22/ https://www.mozilla.org/en-US/security/advisories/mfsa2024-23/
CVE: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-4367 https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-4764 https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-4765 https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-4766 https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-4767 https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-4768 https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-4769 https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-4770 https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-4771 https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-4772 https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-4773 https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-4774 https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-4775 https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-4776 https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-4777 https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-4778
submitted by Tycho_Jissard to k12cybersecurity [link] [comments]


2024.05.15 15:26 uh-despicableme Cuda unavailable in tmux but available normally

I use tmux version 3.0a. The details of my GPU are: NVIDIA-SMI 460.91.03, Driver Version: 460.91.03, CUDA Version: 11.2.
I've been trying to run YOLOv6 code from Ultralytics to train the YOLOv6 model on the COCO dataset in VS Code:
from ultralytics import YOLO
import torch

print(torch.cuda.is_available())
device = torch.device('cuda:2')

model = YOLO('yolov6n.yaml').to(device)
model.info()
model.train(data='coco.yaml', epochs=5, imgsz=640)
metrics = model.val()

results = model('/home/objectdetection/sfoldeyolo6/bus.jpg')
Here, when I try to run this inside tmux, I get the following error:
RuntimeError: The NVIDIA driver on your system is too old (found version 11020). Please update your GPU driver by downloading and installing a new version from the URL: http://www.nvidia.com/Download/index.aspx Alternatively, go to: https://pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver.
But when I run it without tmux, it runs perfectly fine on my GPU.
Why is this happening?
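One way to narrow this down (a diagnostic sketch, not a confirmed fix) is to print the interpreter path, the Torch/CUDA versions, and the CUDA-related environment variables both inside and outside tmux and compare the output; a tmux server keeps the environment it was started with, so panes can end up with a stale PATH or LD_LIBRARY_PATH compared to a fresh shell:

# Diagnostic sketch: run this same script once inside tmux and once outside, then compare.
# The environment variable names below are generic CUDA-related ones, not specific to this setup.
import os
import sys
import torch

print("python        :", sys.executable)
print("torch         :", torch.__version__)
print("torch cuda    :", torch.version.cuda)
print("cuda available:", torch.cuda.is_available())
for var in ("PATH", "LD_LIBRARY_PATH", "CUDA_VISIBLE_DEVICES", "CUDA_HOME"):
    print(var, "=", os.environ.get(var))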
submitted by uh-despicableme to tmux [link] [comments]


2024.05.15 14:48 PunnyHeals Misrepresentation of house price data by Realtor.ca

TL;DR:
  1. Realtor.ca claims the average sale price of a house in Fredericton, NB is $288,300. My own calculations point to the average being approximately $543,878.
  2. Realtor.ca most likely calculates its average house price using the combined average for houses and vacant land. My average for houses and land together was $288,540, only a $240 difference, making this the most likely explanation.
  3. Realtor.ca misrepresents graphs and averages through market capture and pay gating, and could be violating the Competition Act.
**Background**
I have been looking to buy a house for the past several years in the Fredericton area and have been checking the online listings regularly through Realtor.ca, since it is the most common real estate listing website used in New Brunswick. What I liked about Realtor.ca was its ability to provide the average sell price for a house every month, with graphs that showed the average sell price for a house in Fredericton for the past 12 months and 10 years. Having looked for a house for several years, I felt that I had a good idea of the market conditions and price ranges. My anecdotal evidence was that the average house price was much higher than Realtor.ca’s estimate of $288,300. I wondered if my anecdotal evidence could be supported by data.
The objective of this report is to collect list price data from all available listings within the Fredericton area. Once collected, I can take the average price and see if it matches the average price shown by Realtor.ca.
**Average/Median Methodology**
When you use Realtor.ca, you can filter results by the property type. There are six property type categories: Residential (single family home), condo/strata, vacant land, recreational, multi-family, and agriculture. For each of these property types, the asking price and address were copied into an Excel file. The data was collected on May 10, 2024, and included all listings within Fredericton; duplicate listings were removed.
Once all the data was collected, the average and median for each property type were calculated (Table 1). I then compared my calculated average to the Realtor.ca average to determine whether my anecdotal impression that the average house price was higher than Realtor.ca’s figure was justified.
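For reference, the same summary can be sketched in Python with pandas instead of Excel (a hypothetical sketch: the file name and the column names property_type and list_price are assumptions, since the actual spreadsheet layout is not shown here):

# Hypothetical sketch: count, mean, and median list price per property type.
import pandas as pd

listings = pd.read_csv("fredericton_listings_2024-05-10.csv")  # assumed export of the Excel file
summary = listings.groupby("property_type")["list_price"].agg(["count", "mean", "median"])
print(summary.round(0))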
**Results**
There were 107 listings for residential houses (referred to simply as “house” in this report), 245 listings for vacant land, 5 listings for recreational, 7 listings for multi-family, 2 listings for agriculture, and 10 listings for condos (Figure 1).
The average listing price was $543,878 for houses, $177,026 for land, $227,080 for recreation, $826,100 for multi-family, $829,450 for agriculture, and $317,410 for condos. The median listing price was $474,900 for houses, $64,900 for land, $229,900 for recreation, $799,000 for multi-family, $829,450 for agriculture, and $289,900 for condos (Table 1).
**Realtor.ca MLS System Average House Price Claim**
When you search for “houses for sale in Fredericton, NB”, you will see the top search results show Realtor.ca. This is not uncommon since Realtor.ca and its Multiple Listing Service (MLS) have the highest number of listings of any other online real estate listing service for the Fredericton, NB, area. Having most real estate listings concentrated on one system can provide users with a general idea of greater market conditions beyond individual listings, such as averages and trends for cities. Realtor.ca provides this data in the form of “Market Price (CAD)” price trends for the past 12 months, and price trends for the past 10 years (Figure 2). These figures are prominently displayed at the end of the first page of the Fredericton real estate listings (URL: https://www.realtor.ca/nb/fredericton/real-estate).
This leads us to the first claim by the Realtor.ca MLS system claim and our initial objective of this report.
Claim: The average market price in Fredericton sits at $288,300 as of May, 2024.
Analysis: When a user views these figures, it is a safe assumption that when a price is displayed, the user is inclined to believe that “Market Price (CAD)” is the average house price in Fredericton. This is further reinforced if the user reads the description above the figures which states:
“Use our home price trends to better gauge local market conditions and plan your next move. The graphs below show benchmark or average prices of homes sold in the area. Data generated by MLS® Systems and the MLS® Home Price Index (HPI) — Canada’s most advanced tool to gauge local home price levels and trends.”
This small paragraph specifically states, “The graphs below show benchmark or average prices of homes sold in the area.” Based off the graphs and their statement, we can safely interpret that Realtor.ca is explicitly saying that the average home price in Fredericton, NB, currently sits at $288,300; leaving no room for interpretation on how the data can be viewed. The reason I wanted to be explicitly clear on this thought process is that if you look back at the results section of this paper (Table 1) and see that the calculated average of all house listings was $543,878, it represents an 88.65% difference. A couple assumptions that could explain this difference are:
  1. The listings used in the analysis are only a snapshot in time and could not represent an accurate or precise representation of the monthly price average.
  2. Houses that were listed below the average could be selling more quickly, giving us a skewed data set that is not representative of all listings that have been posted.
  3. Realtor.ca gives the average sell price for houses in Fredericton and not the average listing price. There could be a large discrepancy between sell price and list price, resulting in my calculated average being inflated.
The three assumptions made above introduce bias into my conclusions, but given the magnitude of those differences, it could be reasonable to assume there might be an alternative reason causing these discrepancies.
Since there is such a large discrepancy in my calculated average and the average from Realtor.ca, I expanded my analysis to other categories. I combined my residential house data set with the other five property types to see if it would alter our initial average and how close it would come to the calculated Realtor.ca average (Table 2). Realtor.ca claims the average house price in Fredericton was $288,300, which seems to be closest to my calculated average for the combination of house and land listings. With the addition of these combinations, it suggests that Realtor.ca calculates average housing price using houses and land listings.
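As a quick arithmetic check (a sketch that uses only the listing counts and averages already reported above), the count-weighted combination of the house and land averages lands almost exactly on the figure Realtor.ca displays:

# Combine the house and land averages, weighted by their listing counts.
n_house, avg_house = 107, 543_878
n_land, avg_land = 245, 177_026
combined = (n_house * avg_house + n_land * avg_land) / (n_house + n_land)
print(round(combined))  # ~288,541, within a few hundred dollars of the $288,300 shown by Realtor.ca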
Realtor.ca MLS’s claim of the average house price in Fredericton, NB being $288,300 is a misrepresentation of the true market value and conditions. If a company were to calculate averages of an entire real estate market within an area, why would they only include house and land and not the other 4 categories?
**Misleading Representations by Realtor.ca**
The conclusions made from my analysis were made with plenty of explanations and assumptions. Given that the MLS system is a pay gated system, and their patented house price index algorithms are private, I feel it is reasonable to assume that my data is closer to true market prices. This leads us to the next question, if my data isn’t correct, why are the figures, calculations, and methodology misleading users on market conditions? The average user is not going to spend a significant amount of time manually collecting data and putting it into Excel to double check Realtor.ca. The company is the largest multiple listing system used in New Brunswick and holding that status comes with some form of implicit trust that the public holds for information it publishes. In this section, I will lay out sections and guidelines from the Competition Act and why I believe that Realtor.ca is violating the Act.
**Competition Act**
For the below, I will be using the most updated version of the Competition Act R.S.C., 1985, c. C-34, last amended on December 15, 2023 (https://laws.justice.gc.ca/eng/acts/C-34/page-1.html) and the “Application of the Competition Act to Representations on the Internet” published by Competition Bureau Canada (https://publications.gc.ca/collections/collection\_2010/ic/Iu54-1-2009-eng.pdf)
*Section 2.2, Paragraph 4 of the Application of the Competition Act to Representations on the Internet*
“Businesses should not assume that consumers read an entire Web site, just as they do not read every word on a printed page. Accordingly, information required to be communicated to consumers to ensure that a representation does not create a false or misleading impression should be presented in such a fashion as to make it noticeable and likely to be read.”
Explanation: Section 2.2 applies to the average house price and accompanying figures (Figure 2). Realtor.ca shows the average house price in text and graph form but does not disclose that these are house and land price average if my calculations are accurate.
*Section 4.1, Paragraph 1 of the Application of the Competition Act to Representations on the Internet*
“If qualifying information is necessary to prevent a representation from being false or misleading when read on its own, businesses should present that information clearly and conspicuously. Businesses frequently use disclaimers, often signalled by an asterisk, to qualify the general impression of their principal representation when promoting their products or services. As mentioned earlier, the general impression conveyed by the representation, as well as its literal meaning, are taken into account in determining whether a representation is false or misleading.”
Explanation: Section 4.1 applies to Realtor.ca house price indices and other methodologies. A disclaimer in this case would be located within the same small paragraph above the figures. Instead, they use their own house price index to obfuscate their methodologies (Figure 2). Another option they give is below the graphs as “Ask a realtor for more detailed information” which creates an additional barrier to the users right under the Competition Act. Specifically, the “to qualify the general impression of their principal representation when promoting their products or services.” The “ask a realtor” hyperlink brings you to an additional page where you can find their realtors in your area. This is incentivizing the user to use their services over others to access more information. Realtor.ca has a majority market share in New Brunswick which further reinforces their monopolistic practices over real estate that hurts consumers.
*Section 4.1.3, Paragraph 1 of the Application of the Competition Act to Representations on the Internet*
“Businesses may effectively draw attention to a disclaimer so that it is more likely to be read by using attention-grabbing tools to display the disclaimer. In doing so, businesses must be careful not to design attention-grabbing tools in other parts of the advertisement in such a way that they distract the consumer’s attention away from the disclaimer, making it unlikely that the consumer will notice the disclaimer or recognize its importance.”
Explanation: Section 4.1.3 is further evidence of obfuscation and misrepresentation of their graphical aids and calculations. Similar to section 2.2 in the Application of the Competition Act to Representations on the Internet, Realtor.ca placed those figures at the bottom of the first page of listings to draw the user’s attention to their interpretation of data.
*Section 52 (1) of the Competition Act: False or misleading representations*
“No person shall, for the purpose of promoting, directly or indirectly, the supply or use of a product or for the purpose of promoting, directly or indirectly, any business interest, by any means whatever, knowingly or recklessly make a representation to the public that is false or misleading in a material respect.”
Explanation: Section 52 (1) is the main argument for this report. I believe that Realtor.ca knowingly or recklessly misrepresented the average house price in Fredericton using deceptive graphical aids and created a home price index to further obfuscate the methodology.
I am not a lawyer, so I could be misinterpreting the sections of the Competition Act. I believe Realtor.ca has reached the threshold of violating the Competition Act since Section 52.1.1 states:
“For greater certainty, in establishing that subsection (1) was contravened, it is not necessary to prove that (a) any person was deceived or misled; (b) any member of the public to whom the representation was made was within Canada; or (c) the representation was made in a place to which the public had access.”
This amendment to the Competition Act removed the threshold of proving that an individual or the public were deceived or misled. I believe that Realtor.ca has violated all three elements of section 52.1.1 ensuring that they have met the threshold of violating section 52.1 of the Competition Act.
**Conclusion**
I have given numerous caveats to my analysis, so it is possible I have come to the wrong conclusions given the lack of transparency in methodology and limited time frame. One thing I can conclude with certainty, is that Realtor.ca is misrepresenting market conditions through their figures displaying average house prices, pay gates to information, and methodology disclosures guised as a patented as a housing price index. I believe that Realtor.ca should make it clear to the user how their housing price index is calculated. Realtor.ca and the MLS system has succeeded in market capture and fights to keep this information pay gated to only people that benefit from these misleading claims. Regardless of their reasons, these monopolistic practices only benefit anyone under their system through the restriction of information to shape the way the public perceives the market conditions, a clear violation of the Competition Act and a disservice to the public.
There was a lot more I wanted to cover like if Statistics Canada (u/StatCanada) sourced their data from the MLS system and the broader implications of sourcing data that could be misrepresentation. Again, I could be wrong and would welcome any additional relevant information.
https://preview.redd.it/awfmkl0x6l0d1.png?width=1681&format=png&auto=webp&s=c9c4be8b6139c4f079ff343637b159b85e79cd3b
https://preview.redd.it/za540m0x6l0d1.png?width=3816&format=png&auto=webp&s=8c16fdcbc34795f46b38bdf502e1576fb43887dd
https://preview.redd.it/h5lz8p0x6l0d1.png?width=4166&format=png&auto=webp&s=3a76bdd71e64435fbaa38a768469d287b508946a
https://preview.redd.it/5qz74m0x6l0d1.png?width=3262&format=png&auto=webp&s=ab9363605bd6b31f324b5bb58f1fcc847f17a67b
submitted by PunnyHeals to newbrunswickcanada [link] [comments]

