FortiOS FortiGate firmware Linux VM


2024.05.13 22:48 GMation PopOS on Thinkpad 2-in-1 Gen 9 (2024)

I'm looking for guidance on installing PopOS (or any Linux distro) on a new Thinkpad 2-in-1. My ultimate goal is to use Linux as my daily driver, with a Windoze VM to use only when necessary. I'm basically a noob with Linux, at least as a desktop/laptop workstation.
PopOS is installed
The first issue is that the OS does not recognize the trackpad. No point, no click, nothing. The mouse does move using the pointing stick, but there is still no way to click. Any suggestions how to resolve this? (The BIOS does work with the trackpad, so it's not a hardware issue.)
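A first diagnostic pass that might help narrow this down - a minimal sketch assuming a standard Pop!_OS install; the grep patterns below are generic examples, not your exact hardware:
sudo libinput list-devices | grep -iA5 touchpad    # does libinput see a touchpad device at all?
sudo dmesg | grep -iE 'i2c_hid|elan|synaptics|touchpad'    # many 2-in-1 touchpads are I2C-HID devices; errors here point at the kernel driver
If the touchpad shows up in neither output, it is usually a kernel/driver issue rather than a desktop-settings one.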
submitted by GMation to pop_os [link] [comments]


2024.05.13 22:44 DoronumaNoRudeus Seeking advice for OSCP setup

Maybe this has been asked before, but what do you think is the best way to tackle the OSCP exam and labs: a VM (VirtualBox or VMware?) or just installing Kali as the main OS?
Up until now I have been doing the course exercises on my main PC, an i7 with 32 GB RAM. It runs Linux by default (Ubuntu). I installed VirtualBox and the official Kali image made for VirtualBox.
To be honest it lagged only once, due to drag and drop, and it took me two hours to get it back to a working state. Very rarely, the copy-and-paste function of VirtualBox does not work.
I don't want to have these issues during the exam.
I have another PC, an i5 8th gen with 8 GB RAM. Should I just install Kali on it and continue with the labs there?
Having Kali as the main OS would be more convenient (no need to deal with all those VirtualBox bugs on Linux).
However, I know it is not good to have Kali as a main OS. First, it is an open playground for an attacker who gets onto it (it already has all the tools they need). Plus, Kali is not made for day-to-day use. If a bug hits my Kali main OS during the exam I am just done, whereas in a VM I can just restore a snapshot... (in my younger days, I deleted NetworkManager while using aircrack-ng on Kali, and it took me a long time to fix...)
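For what it's worth, the snapshot workflow can also be scripted from the host so a restore takes seconds; a rough sketch, assuming the guest is simply named "kali":
VBoxManage snapshot "kali" take "pre-exam" --description "known good state"
VBoxManage controlvm "kali" poweroff    # if the guest is wedged
VBoxManage snapshot "kali" restore "pre-exam"
VBoxManage startvm "kali" --type gui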
What do you think ?
submitted by DoronumaNoRudeus to oscp [link] [comments]


2024.05.13 22:24 METDeath Gaming remote desktop WITH clipboard access?

I have a Windows 10 VM with a GTX 1650 Super that I use for some simpler games that don't support cloud saves for whatever reason. I'd like to convert this to Linux and to stop using Parsec. However, one of the big features of Parsec and Chrome Remote Desktop that I do use is clipboard sync. Moonlight/Sunshine doesn't support this, plus if I connect from a new device I have to remote in via another program anyway to enable remote access.
I've just recently found out about Nomachine, but haven't had a chance to try it yet.
Ideally it would be fully self-hosted, with no external discovery/proxy server (as Parsec/Chrome Remote Desktop use), and it would not require me to already have the VM screen up to add something to it.
I will probably be running either EndeavourOS or Nobara, maybe Bazzite. I would also need to log in to the system if it gets rebooted or locked.
submitted by METDeath to linux4noobs [link] [comments]


2024.05.13 21:57 CryptoNiight Root filesystem is full

Apparently, the swap file has almost completely filled the root filesystem partition. I ran swapoff to disable the swap file, but I'm not sure about what to do next. Please advise. Thanks in advance.
EDIT: Linux is running in a VM on a Synology NAS. The swap is actually a folder, not a file. The swap folder is in the root partition. I apologize for the confusion.
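A rough way to confirm what is actually eating the root partition before deleting anything - paths and flags here are only an example sketch:
swapon --show    # list any swap files/partitions still configured
cat /proc/swaps
grep -i swap /etc/fstab
sudo du -xh -d1 / 2>/dev/null | sort -h | tail    # largest top-level directories on the root filesystem only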
submitted by CryptoNiight to linuxquestions [link] [comments]


2024.05.13 20:43 zfsbest Tutorial / HOWTO migrate a PVE ZFS boot/root mirror to smaller disks (256GB to 128GB)

Tested in a VM; the procedure is not perfect, but it should give you a decent idea of what is needed.
Docs to have on hand:
https://forum.proxmox.com/threads/replace-512gb-ssds-with-500gb-ssds.143077/
https://pve.proxmox.com/wiki/ZFS_on_Linux#sysadmin_zfs_change_failed_dev
https://pve.proxmox.com/wiki/Host_Bootloader#sysboot_proxmox_boot_tool
https://forum.proxmox.com/threads/fixing-uefi-boot.87719/
Original mirror drives: 2x256GB
vda, vdb
Replacement mirror drives: 2x128GB
vdc, vdd
Recreate partitions 1-3 from original boot drive vda on new smaller disks - NOTE cannot use gdisk for this, will not do the starting sectors right!
Do: (this is the imperfect part - if someone has a better method please post - doing ' sgdisk -R ' to clone the partition layout does NOT work with smaller disks)
You can try something like this but I haven't tested it:
sgdisk -g \
-n 1:0:+1M \
-n 2:0:+1G \
-n 3:0:0 \
-t 1:8300 \
-t 2:EF00 \
-t 3:BF01 \
-p /dev/disk/by-id/yourNewDisk # do on both vdc, vdd
What I did:
dd if=/dev/vda of=/dev/vdc bs=1M count=1025
dd if=/dev/vdb of=/dev/vdd bs=1M count=1025
At this point I had to fix partition 3 on the new smaller drives (since the copied partition table still thinks the disk is 256GB, but it's only 128GB) with gdisk: delete partition 3, write, quit, then go back in and create partition 3 to fill the 128GB (see the sgdisk sketch below).
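A non-interactive equivalent with sgdisk would look roughly like this (untested here; adjust device names to your setup):
sgdisk -e /dev/vdc                      # relocate the backup GPT header to the end of the smaller disk
sgdisk -d 3 /dev/vdc                    # delete the copied, oversized partition 3
sgdisk -n 3:0:0 -t 3:BF01 /dev/vdc      # recreate it to fill the rest of the 128GB disk, ZFS type code
sgdisk -e /dev/vdd
sgdisk -d 3 /dev/vdd
sgdisk -n 3:0:0 -t 3:BF01 /dev/vdd
partprobe /dev/vdc /dev/vdd             # make the kernel re-read the new partition tables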
Just in case:
zpool labelclear /dev/vdc3
zpool labelclear /dev/vdd3
wipefs -a /dev/vdc3 /dev/vdd3
Make the root pool a temporary raid10:
zpool add -f rpool /dev/vdc3 # NOTE this is just a temporary imbalance, we are about to mirror/stripe it
zpool attach rpool vdc3 vdd3 # this is the raid10/mirror part
zpool detach rpool vda3
zpool remove rpool vdb3 # This removes the original 256GB mirror vdev and copies everything over to the 128GBs
-- NOTE - Wait for the resilver to finish - check with ' zpool status -v ' --
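If you prefer blocking over polling, OpenZFS 2.x (as shipped with recent PVE) also has ' zpool wait ', which should work here:
zpool wait -t remove rpool      # returns once the device evacuation has finished
zpool wait -t resilver rpool    # returns once the resilver has finished
zpool status -v rpool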
Now we need to make sure both of the new disks can boot:
proxmox-boot-tool status # Note if grub is being used or not here
proxmox-boot-tool format /dev/vdc2 --force
proxmox-boot-tool init /dev/vdc2 grub
proxmox-boot-tool format /dev/vdd2 --force
proxmox-boot-tool init /dev/vdd2 grub
proxmox-boot-tool refresh
Now to shut down: issue ' halt -p ', remove the old disks, boot back up, and issue:
proxmox-boot-tool clean # remove old disk entries
Successfully tested booting with only new disk 2, writing a 20GB random file, and then reattaching disk 1 after a shutdown/reboot = OK, disk 1 resilvered with the new/missing data.
Last, run ' zpool clear rpool '.
submitted by zfsbest to Proxmox [link] [comments]


2024.05.13 20:10 ozikov Thread with Sonoff ZBDongle-E

I just bought a Sonoff ZBDongle-E and I am trying to follow this article: https://smarthomescene.com/guides/how-to-enable-thread-and-matter-support-on-sonoff-zbdongle-e/ to make the Sonoff dongle act as a Thread border router. I have Home Assistant running in a VM and it does see the dongle. But now I need to install the Silicon Labs Multiprotocol add-on, and I can't find it in the add-on store. I tried to configure it without installing it, but that gives an error that it's running the wrong firmware, even though I did exactly what the article describes for installing the correct firmware. I want
submitted by ozikov to homeassistant [link] [comments]


2024.05.13 20:05 jbiz143 Colima and docker instability: randomly freezing, requiring reset

I've been trying to troubleshoot an issue with Colima I've been seeing for weeks.
Colima will install and start fine, and I can start my ~20 containers without issue. However, mostly randomly, colima and docker will suddenly be completely unresponsive and containers will stop.
The only remedy is to restart the mac and run colima delete and start over.
I'm running colima on an Intel i9 with 32GB RAM.
I'm hoping someone has been able to resolve the instability issues with colima running docker on a Mac. The performance of docker on colima is so much better than Docker Desktop and I'd rather not have to go back to it!
Here's the default.yaml:
# Number of CPUs to be allocated to the virtual machine. # Default: 2
cpu: 16
# Size of the disk in GiB to be allocated to the virtual machine.
# NOTE: changing this has no effect after the virtual machine has been created. # Default: 60
disk: 120
# Size of the memory in GiB to be allocated to the virtual machine. # Default: 2
memory: 24
# Architecture of the virtual machine (x86_64, aarch64, host). # Default: host
arch: x86_64
# Container runtime to be used (docker, containerd). # Default: docker
runtime: docker
# Set custom hostname for the virtual machine.
# Default: colima # colima-profile_name for other profiles
hostname: colima
# Kubernetes configuration for the virtual machine.
kubernetes:
  # Enable kubernetes. # Default: false
  enabled: false
  # Kubernetes version to use.
  # This needs to exactly match a k3s version https://github.com/k3s-io/k3s/releases
  # Default: latest stable release
  version: v1.28.3+k3s2
  # Additional args to pass to k3s https://docs.k3s.io/cli/server # Default: traefik is disabled
  k3sArgs:
    - --disable=traefik
# Auto-activate on the Host for client access.
# Setting to true does the following on startup
# - sets as active Docker context (for Docker runtime).
# - sets as active Kubernetes context (if Kubernetes is enabled).
# Default: true
autoActivate: true
# Network configurations for the virtual machine.
network:
  # Assign reachable IP address to the virtual machine.
  # NOTE: this is currently macOS only and ignored on Linux. # Default: false
  address: false
  # Custom DNS resolvers for the virtual machine.
  # EXAMPLE
  # dns: [8.8.8.8, 1.1.1.1]
  # Default: []
  dns: []
  # DNS hostnames to resolve to custom targets using the internal resolver.
  # This setting has no effect if a custom DNS resolver list is supplied above.
  # It does not configure the /etc/hosts files of any machine or container.
  # The value can be an IP address or another host.
  # EXAMPLE
  # dnsHosts:
  #   example.com: 1.2.3.4
  dnsHosts: {}

# =====================================================================
# ADVANCED CONFIGURATION
# =====================================================================

# Forward the host's SSH agent to the virtual machine. # Default: false
forwardAgent: false
# Docker daemon configuration that maps directly to daemon.json.
# https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-configuration-file.
# NOTE: some settings may affect Colima's ability to start docker. e.g. `hosts`.
# EXAMPLE - disable buildkit
# docker:
#   features:
#     buildkit: false
# EXAMPLE - add insecure registries
# docker:
#   insecure-registries:
#     - myregistry.com:5000
#     - host.docker.internal:5000
# Colima default behaviour: buildkit enabled
# Default: {}
docker: {}
# Virtual Machine type (qemu, vz)
# NOTE: this is macOS 13 only. For Linux and macOS <13.0, qemu is always used.
# vz is macOS virtualization framework and requires macOS 13
# Default: qemu
vmType: vz
# Utilise rosetta for amd64 emulation (requires m1 mac and vmType `vz`) # Default: false
rosetta: false
# Volume mount driver for the virtual machine (virtiofs, 9p, sshfs).
# virtiofs is limited to macOS and vmType `vz`. It is the fastest of the options.
# 9p is the recommended and the most stable option for vmType `qemu`.
# sshfs is faster than 9p but the least reliable of the options (when there are lots
# of concurrent reads or writes).
# Default: virtiofs (for vz), sshfs (for qemu)
mountType: virtiofs
# Propagate inotify file events to the VM. # NOTE: this is experimental.
mountInotify: true
# The CPU type for the virtual machine (requires vmType `qemu`).
# Options available for host emulation can be checked with: `qemu-system-$(arch) -cpu help`.
# Instructions are also supported by appending to the cpu type e.g. "qemu64,+ssse3".
# Default: host
cpuType: ""
# Custom provision scripts for the virtual machine.
# Provisioning scripts are executed on startup and therefore needs to be idempotent.
# Default: []
provision: []
# Modify ~/.ssh/config automatically to include a SSH config for the virtual machine.
# SSH config will still be generated in ~/.colima/ssh_config regardless.
# Default: true
sshConfig: true
# Configure volume mounts for the virtual machine.
# Colima mounts user's home directory by default to provide a familiar user experience.
# Colima default behaviour: $HOME and /tmp/colima are mounted as writable.
# Default: []
mounts:
  - location: /Volumes/Drive
    writable: true
# Environment variables for the virtual machine.
# EXAMPLE
# env:
#   KEY: value
#   ANOTHER_KEY: another value
# Default: {}
env: {}
I note the following is almost always present in ha.stderr.log but I can't correlate it directly to the freezes.
{"error":"failed to run [ssh -F /dev/null -o IdentityFile=\"/Users/johndoe/.colima/_lima/_config/user\" -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o NoHostAuthenticationForLocalhost=yes -o GSSAPIAuthentication=no -o PreferredAuthentications=publickey -o Compression=no -o BatchMode=yes -o IdentitiesOnly=yes -o Ciphers=\"^aes128-gcm@openssh.com,aes256-gcm@openssh.com\" -o User=johndoe -o ControlMaster=auto -o ControlPath=\"/Users/johndoe/.colima/_lima/colima/ssh.sock\" -o ControlPersist=yes -T -O forward -L 0.0.0.0:8009:[::]:8009 -N -f -p 49304 127.0.0.1 --]: \"\": exit status 255","level":"warning","msg":"failed to set up forwarding tcp port 8009 (negligible if already forwarded)","time":"2024-05-12T09:55:03+01:00"} 
submitted by jbiz143 to docker [link] [comments]


2024.05.13 19:41 seanthegeek Why are byte counts in syslog for a virtual IP extremely inflated?

I have a FortiGate running FortiOS 7.4 with a policy for accepting incoming SSH connections from specific IP addresses to a virtual IP so they can backup data over rsync.
The FortiGate policy page shows that 11.04 TB has been used by this policy between 2024/02/18 and 2024/05/13, which is about what I expect.
https://preview.redd.it/3b4r4czka80d1.png?width=607&format=png&auto=webp&s=f6557ff4d4f3c63d9db707145d2257e6a815f495
However, a sum of byte counts sent to Graylog over the same time period is massively inflated.
https://preview.redd.it/ncmxaglvc80d1.png?width=712&format=png&auto=webp&s=6996f572bf5025ef669471c8f3b90e23e330589b
Does anyone know why? It looks like this is a bug in FortiOS, but I want to make sure I'm not missing something before I open a support ticket.
The policy looks like this:
config firewall policy
edit 47
set name "Internet to archive SSH"
set uuid 4468ac36-8ea6-51ee-f4e0-f5fbc962d501
set srcintf "wan1"
set dstintf "internal"
set action accept
set srcaddr "external archive SSH access"
set dstaddr "archive SSH"
set schedule "always"
set service "ALL"
set utm-status enable
set ssl-ssh-profile "certificate-inspection"
set ips-sensor "default"
set application-list "iot"
set logtraffic all
set logtraffic-start enable
set nat enable
next
end
submitted by seanthegeek to fortinet [link] [comments]


2024.05.13 19:38 -puppyguppy- Can't connect to wifi after today's big update

EDIT: SOLVED
FIX: Updating the kernel from the unsupported 5.9 series to a later version
I hope this is not the wrong place to post this. I have been using Manjaro as my main OS since Jan 2021. This seemed like a large update, with KDE 6 and all.
My wifi is no longer working, even though the device is detected and it seems like I have the right driver.
journalctl -p 3 -xb | grep iwlwifi
[ 4.772430] iwlwifi 0000:04:00.0: enabling device (0000 -> 0002)
[ 4.786500] iwlwifi 0000:04:00.0: Direct firmware load for iwlwifi-8265-36.ucode failed with error -2
[ 4.786528] iwlwifi 0000:04:00.0: Direct firmware load for iwlwifi-8265-35.ucode failed with error -2
[ 4.786692] iwlwifi 0000:04:00.0: Direct firmware load for iwlwifi-8265-34.ucode failed with error -2
[ 4.786721] iwlwifi 0000:04:00.0: Direct firmware load for iwlwifi-8265-33.ucode failed with error -2
[ 4.786749] iwlwifi 0000:04:00.0: Direct firmware load for iwlwifi-8265-32.ucode failed with error -2
[ 4.786773] iwlwifi 0000:04:00.0: Direct firmware load for iwlwifi-8265-31.ucode failed with error -2
[ 4.786794] iwlwifi 0000:04:00.0: Direct firmware load for iwlwifi-8265-30.ucode failed with error -2
[ 4.786814] iwlwifi 0000:04:00.0: Direct firmware load for iwlwifi-8265-29.ucode failed with error -2
[ 4.786836] iwlwifi 0000:04:00.0: Direct firmware load for iwlwifi-8265-28.ucode failed with error -2
[ 4.786859] iwlwifi 0000:04:00.0: Direct firmware load for iwlwifi-8265-27.ucode failed with error -2
[ 4.786886] iwlwifi 0000:04:00.0: Direct firmware load for iwlwifi-8265-26.ucode failed with error -2
[ 4.786908] iwlwifi 0000:04:00.0: Direct firmware load for iwlwifi-8265-25.ucode failed with error -2
[ 4.786936] iwlwifi 0000:04:00.0: Direct firmware load for iwlwifi-8265-24.ucode failed with error -2
[ 4.786961] iwlwifi 0000:04:00.0: Direct firmware load for iwlwifi-8265-23.ucode failed with error -2
[ 4.786987] iwlwifi 0000:04:00.0: Direct firmware load for iwlwifi-8265-22.ucode failed with error -2
[ 4.786989] iwlwifi 0000:04:00.0: no suitable firmware found!
[ 4.786991] iwlwifi 0000:04:00.0: minimum version required: iwlwifi-8265-22
[ 4.786992] iwlwifi 0000:04:00.0: maximum version supported: iwlwifi-8265-36
[ 4.786994] iwlwifi 0000:04:00.0: check git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
rfkill --output-all
ID TYPE      DEVICE              TYPE-DESC SOFT      HARD
 0 bluetooth tpacpi_bluetooth_sw Bluetooth unblocked unblocked
 2 bluetooth hci1                Bluetooth unblocked unblocked
lspci -v
04:00.0 Network controller: Intel Corporation Wireless 8265 / 8275 (rev 78)
        Subsystem: Intel Corporation Dual Band Wireless-AC 8265
        Flags: fast devsel, IRQ 18
        Memory at ec000000 (64-bit, non-prefetchable) [size=8K]
        Capabilities: [c8] Power Management version 3
        Capabilities: [d0] MSI: Enable- Count=1/1 Maskable- 64bit+
        Capabilities: [40] Express Endpoint, IntMsgNum 0
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [140] Device Serial Number ac-ed-5c-ff-ff-73-2b-39
        Capabilities: [14c] Latency Tolerance Reporting
        Capabilities: [154] L1 PM Substates
        Kernel modules: iwlwifi
modinfo iwlwifi | grep iwlwifi
filename: /lib/modules/5.9.16-1-MANJARO/kernel/drivers/net/wireless/intel/iwlwifi/iwlwifi.ko.xz
firmware: iwlwifi-8265-36.ucode
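Since the fix was getting off the EOL 5.9 series, for anyone else landing here, the usual Manjaro way is roughly as follows (exact kernel package names depend on what mhwd currently offers):
mhwd-kernel -li                 # list installed kernels
sudo mhwd-kernel -i linux66     # install a currently supported kernel series
sudo mhwd-kernel -r linux59     # optionally drop 5.9 after rebooting into the new kernel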
submitted by -puppyguppy- to ManjaroLinux [link] [comments]


2024.05.13 19:25 00and Dynamic routing on debian 11?

I have a Debian 11 machine that I experiment with various things on (virtualization, Docker, etc.). This machine has tens of subnets for various containers and VMs. These subnets can rotate and change quite rapidly during the day (i.e. a new subnet with VMs, or a new Docker network). Also, those subnets are only visible to that Debian machine.
My goal is to have those subnets from debian machine visible to my router, so devices in my main network could reach them. So my first thought: use OSPF (mostly, because both my routers understand OSPF, and I'm familiar with it). I already did set up simple OSPF between my two routers, just to see if they "like each other", that's no problem for me.
My main hurdle is configuring OSPF on that debian machine, I tried using frr, but I couldn't quite grasp the configuration and get it working. That doesn't mean that I don't want to use it again, I'm just not quite sure how to get it going.
Here are my questions:
1. Is it even possible to do what I'm describing?
2. If it is, is dynamic routing the right approach?
Since I'm not the most proficient with Linux, any suggestions with specific commands would be greatly appreciated. Thanks for your patience in advance!
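As a starting point, a minimal FRR sketch that advertises everything locally connected into OSPF (the 192.168.1.0/24 network statement is a placeholder for the subnet facing your routers; redistributing connected routes is what picks up new Docker/VM subnets automatically):
sudo apt install frr
sudo sed -i 's/^ospfd=no/ospfd=yes/' /etc/frr/daemons    # enable the OSPF daemon
sudo systemctl restart frr
sudo vtysh -c 'configure terminal' -c 'router ospf' \
  -c 'network 192.168.1.0/24 area 0' \
  -c 'redistribute connected' \
  -c 'end' -c 'write memory'
Your routers should then learn the container/VM subnets as OSPF external routes.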
submitted by 00and to homelab [link] [comments]


2024.05.13 19:12 nemanja_jovic Packer: Help with building Windows 11 image with vsphere-iso builder

Hello community,
I have been struggling to build the Windows 11 image on VMware, some help would be greatly appreciated.
I was able to configure HCL/Packer when it comes to VMware configuration (mostly I hope).
At the moment, Windows 11 gets provisioned up to the point where the system boots to the Windows OS screen, and then it hangs. I believe WinRM is not properly started via the XML answer file. I can confirm that the answer file works, because the Windows setup settings are processed and the Windows installation finishes.
Also, I could not use the floppy drive, since it's not supported with UEFI and TPM, so I am mapping the VMware tools ISO from the datastore and will include the part that installs the tools in the answer file - my doubt is: is WinRM even started if Packer can't connect to the IP?
I am having trouble understanding how to troubleshoot the answer file; it's really cumbersome.
My Packer console just says it is waiting for the IP.
Any suggestions are very welcome, thanksss!
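One rough sanity check, once the VM is sitting at the desktop: read its address off the console with ipconfig, then probe the WinRM ports from the machine running Packer (the IP below is just a placeholder):
nc -vz 192.0.2.50 5985    # WinRM over HTTP
nc -vz 192.0.2.50 5986    # WinRM over HTTPS, which winrm_use_ssl = "true" expects
If neither port answers, the setup script/listener never ran; if 5985 answers but 5986 does not, the communicator settings and the listener don't match.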
Here's my packer configuration file:
packer {
  required_plugins {
    vsphere = {
      version = "~> 1"
      source  = "github.com/hashicorp/vsphere"
    }
  }
}

source "vsphere-iso" "windows11" {
  vcenter_server      = "vcenter.automation.lab"
  username            = "administrator@automation.lab"
  password            = "MyPassword123"
  datacenter          = "automation.lab"
  cluster             = "Automation Lab Cluster"
  folder              = "packer-tests"
  datastore           = "datastore1"
  host                = "esxi.automation.lab"
  insecure_connection = "true"
  vm_name             = "Windows11_${uuidv4()}"
  communicator        = "winrm"
  winrm_password      = "MyPassword123"
  winrm_username      = "Administrator"
  winrm_insecure      = "true"
  winrm_timeout       = "3m"
  winrm_use_ssl       = "true"
  CPUs                = "4"
  RAM                 = "4096"
  RAM_reserve_all     = true
  cd_files            = ["setup/autounattend.xml"]
  disk_controller_type = ["lsilogic-sas"]
  firmware            = "efi"
  guest_os_type       = "windows11_64Guest"
  iso_paths = [
    "[datastore1] ISO/Win11_23H2_English_x64v2.iso",
    "[datastore1] vmware_tools/windows.iso"
  ]
  boot_wait    = "20s"
  boot_command = [
    "",
    "",
    ""
  ]
  http_directory       = "./setup"
  tools_upgrade_policy = true
  network_adapters {
    network      = "VM Network"
    network_card = "vmxnet3"
  }
  storage {
    disk_size             = "65536"
    disk_thin_provisioned = false
  }
  convert_to_template = "true"
  vTPM                = true
}

build {
  sources = ["source.vsphere-iso.windows11"]
  provisioner "windows-shell" {
    inline = ["dir c:\\"]
  }
}
Here's my answer file:
[windowsPE / specialize settings lost their markup in this paste; the recoverable values are: UI language and locales en-US; wipe disk 0; create partitions 1 Primary 499 MB, 2 EFI 100 MB, 3 Primary (extend); format 1 NTFS recovery (de94bba4-06d1-4d40-a16a-bfd50179d6ac), 2 FAT32, 3 NTFS C:; install image /IMAGE/NAME "Windows 11 Pro" to disk 0 partition 3; time zone Eastern Standard Time; input locale 0409:00010409 en-US. The surviving tail of the file:]
            <AutoLogon>
                <Password>
                    <Value>MyPassword123</Value>
                    <PlainText>true</PlainText>
                </Password>
                <Enabled>true</Enabled>
                <LogonCount>1</LogonCount>
                <Username>Administrator</Username>
            </AutoLogon>
            <FirstLogonCommands>
                <SynchronousCommand wcm:action="add">
                    <Order>1</Order>
                    <CommandLine>powershell -Command "Set-ExecutionPolicy -ExecutionPolicy Unrestricted -Force";</CommandLine>
                </SynchronousCommand>
                <SynchronousCommand wcm:action="add">
                    <Order>2</Order>
                    <CommandLine>powershell -Command "Invoke-WebRequest -Uri http://{{ .HTTPIP }}:{{ .HTTPPort }}/setup.ps1 -OutFile C:\setup.ps1; C:\setup.ps1"</CommandLine>
                </SynchronousCommand>
            </FirstLogonCommands>
            <OOBE>
                <HideEULAPage>true</HideEULAPage>
                <HideLocalAccountScreen>true</HideLocalAccountScreen>
                <HideOEMRegistrationScreen>true</HideOEMRegistrationScreen>
                <HideOnlineAccountScreens>true</HideOnlineAccountScreens>
                <HideWirelessSetupInOOBE>true</HideWirelessSetupInOOBE>
                <UnattendEnableRetailDemo>false</UnattendEnableRetailDemo>
                <ProtectYourPC>1</ProtectYourPC>
            </OOBE>
            <UserAccounts>
                <AdministratorPassword>
                    <Value>MyPassword123</Value>
                    <PlainText>true</PlainText>
                </AdministratorPassword>
            </UserAccounts>
        </component>
    </settings>
</unattend>
WinRM setup file:
$ErrorActionPreference = "Stop"
# Switch network connection to private mode
# Required for WinRM firewall rules
$profile = Get-NetConnectionProfile
Set-NetConnectionProfile -Name $profile.Name -NetworkCategory Private
# WinRM Configure
winrm quickconfig -quiet
winrm set winrm/config/service '@{AllowUnencrypted="true"}'
winrm set winrm/config/service/auth '@{Basic="true"}'
netsh advfirewall firewall add rule name="Windows Remote Managment (HTTP-In)" dir=in action=allow protocol=TCP localport=5985
# Reset auto logon count
# https://docs.microsoft.com/en-us/windows-hardware/customize/desktop/unattend/microsoft-windows-shell-setup-autologon-logoncount#logoncount-known-issue
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon' -Name AutoLogonCount -Value 0
Get-PsDrive | out-file ~/desktop/psdrives.txt
notepad ~/desktop/psdrives.txt
My folder structure:
https://preview.redd.it/l4msxgcd880d1.png?width=571&format=png&auto=webp&s=94a87c09a5340bceb9da6970755e71fcf3f0cf4f
submitted by nemanja_jovic to hashicorp [link] [comments]

2024.05.13 19:04 Zechariah_B_ Looking for potential causes of boot partition not working correctly. Kernel files not inside it after kernel installed.

I installed an old kernel 6.8.5 that was still available on Fedora's repo via dnf. It did not appear in the boot partition. No config, no vmlinuz, not a single file exists. A newer kernel 6.8.7 was automatically removed and nothing was changed in the boot partition. Old files in boot were present that had no relevant files installed in the root. Was that normal? I updated grub afterwards with grub2-mkconfig -o /etc/grub2-efi.cfg. Rebooting put me in Grub's recovery command line. It seems like something important to look out for, but I have no clue how this occurred so it can be avoided. What potential things could have happened to cause this?
Context: I recently had my Fedora-installed computer update, which unfortunately came with new kernels. dmesg blew up with problems and the Nvidia driver was taken down, alongside missing kernel modules that were not updated for the new kernels for whatever reason. I did not know that at first. Missing WiFi, Bluetooth, and many functionalities. I removed Nvidia because it was usually a cause of that. I tried using older kernels. No change. Then I encountered another issue after installing kernel 6.8.5, with the boot partition not working right. Hence the concerns.
I took a draconian approach to fixing this by using a Fedora Linux 40 Live ISO. I used dnf's installroot feature to install the "Fedora Workstation" group into a folder. I replaced my computer's root directory with that folder while making sure to preserve SELinux contexts to get rid of anything unusual. I wiped /boot and /boot/efi. I then mounted /boot, /boot/efi, /sys, /proc, /dev, /run accordingly, then used chroot. I installed the kernels, installed grub, and now the boot partition works as it should when a kernel is installed or removed.
I use Fedora Linux 40 (Workstation Edition) on a Lenovo Legion 5 15ACH6 with the latest available firmware HHCN37WW.
submitted by Zechariah_B_ to linuxquestions [link] [comments]

2024.05.13 18:51 ScaleApprehensive926 Uninstalling OmsAgentForLinux in Portal Hangs

Today I started testing installing the new AzureMonitorLinuxAgent on our VMs. I did this by running the Azure CLI command:
az vm extension set --name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor --ids "<vm resource id>" --enable-auto-upgrade true
The install appeared to work fine (we'll see tomorrow if it appeases the Advisor security check). However, I also uninstalled the old OmsAgentForLinux by simply clicking on it and selecting Uninstall. I see in the docs they only outline uninstalling it through the local shell (https://learn.microsoft.com/en-us/azure/azure-monitor/agents/agent-manage?tabs=PowerShellLinux#linux-agent-2). I have been checking the status of the VM on and off for more than an hour and it still lists the OmsAgentForLinux extension as "Transitioning", and I have a running job in my notifications that says "Deleting virtual machine extension".
Should I have just run the thing through the command line instead of using the Portal? Should the "transition" take this long? I won't feel comfortable doing anything to my real VMs until this has all settled and I get good scan results from all my security stuff.
submitted by ScaleApprehensive926 to AZURE [link] [comments]

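For reference, the CLI equivalent of the Portal uninstall described above is roughly the following (resource group and VM name are placeholders):
az vm extension delete --resource-group myRG --vm-name myVM --name OmsAgentForLinux
az vm extension list --resource-group myRG --vm-name myVM -o table    # confirm it is gone
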
2024.05.13 18:49 mohanex2001 If you had the chance to start your cybersecurity career from zero would you go for academic research or industry?

Hello all,
In August, I will be graduating with an embedded cybersecurity master's degree. I actually have some good experience from my higher-education internships:
1. Internship in IT for a month in a small company (Linux - Windows - Networking...)
2. Internship for 3 months in a research center (GNU Radio - C++ - Linux - Networking...)
3. Internship for 4 months in a small company (FPGA - Firmware - Electronics - Linux...)
4. Now I'm in an 11-month apprenticeship in hardware security (Pentesting - FPGA - Linux...)
So my question is: would I rather take a job in a research center as an academic research engineer, or go for an industry job? What do you suggest? Note: I live in Europe.
submitted by mohanex2001 to careerguidance [link] [comments]

2024.05.13 18:28 thebwack Help using my PC build to share apps with colleagues.

I've got an AMD Threadripper build with 64GB RAM and an RTX 4090 that I use at work for 3D rendering (mostly Unreal Engine). I want to start self-hosting apps (AI tools) for the team to use (we've got a few up and running on various computers here) so everyone can use the 4090 when they need to crunch a bunch of stuff at once. Basically I want to have the apps as high-availability as possible. I don't use the PC that often, and when I do I could suspend those services for the time.
Main question is the best way to host them so that they each get GPU access, but in a background kind of way. The PC is rarely rebooted.
My initial thought is Docker Desktop, which I believe allows GPU support in the most recent versions. I also thought about making it a Linux machine and running Windows as a VM inside that when needed, or if possible having Linux share the GPU with the Windows VM.
Just curious if anything better than these comes to mind.
submitted by thebwack to selfhosted [link] [comments]

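On the Docker route mentioned above, containers usually get the GPU through the NVIDIA Container Toolkit plus the --gpus flag; a quick smoke test looks roughly like this (the image tag is only an example):
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
The same --gpus reservation works for long-running containers, so the apps can sit in the background and share the 4090.
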
2024.05.13 17:41 EchoJobs Hiring Linux Firmware Engineer/Developer | USD 125k-130k | Los Angeles, CA US [Python]

submitted by EchoJobs to joblead [link] [comments]

2024.05.13 17:41 EchoJobs Hiring Linux Firmware Engineer/Developer | USD 125k-130k | Los Angeles, CA US [Python]

submitted by EchoJobs to echojobs [link] [comments]

2024.05.13 17:41 EchoJobs Hiring Linux Firmware Engineer/Developer | USD 125k-130k | Los Angeles, CA US [Python]

submitted by EchoJobs to CodingJobs [link] [comments]

2024.05.13 17:40 EchoJobs Hiring Linux Firmware Engineer/Developer | USD 125k-130k | Los Angeles, CA US [Python]

submitted by EchoJobs to pythonjob [link] [comments]

2024.05.13 17:29 RoganDawes Aurga Viewer firmware examination

In case anyone else is curious, I downloaded the Windows application, figured out how it fetches updated firmware for the Aurga Viewer, downloaded it and did some analysis.
Firstly, download the Windows 8+ app from https://www.aurga.com/pages/download.
If you don't want to install it, you can extract the installer using 7Zip:
7z e AURGAViewer_Installer_x64_v1.1.0.2.exe
Searching for strings in AURGAViewer.exe gives /fw/latest.img. Then you can fetch that from https://www.aurga.com/fw/latest.img, which is a redirect to https://cdn.shopify.com/s/files/1/0627/4659/1401/files/240427225356.img
Running binwalk on that shows:
binwalk 240427225356.img
DECIMAL       HEXADECIMAL     DESCRIPTION
--------------------------------------------------------------------------------
49152         0xC000          JFFS2 filesystem, little endian
212992        0x34000         Flattened device tree, size: 14249 bytes, version: 17
229376        0x38000         Linux kernel ARM boot executable zImage (little-endian)
254904        0x3E3B8         xz compressed data
255325        0x3E55D         xz compressed data
2994176       0x2DB000        Squashfs filesystem, little endian, version 4.0, compression:xz, size: 5222500 bytes, 670 inodes, blocksize: 1048576 bytes, created: 2024-04-27 14:53:58
You can then slice and dice the JFFS2 and squashfs filesystems from the image:
dd if=240427225356.img bs=1 skip=49152 count=$((0x34000-0xc000)) of=jffs
dd if=240427225356.img bs=1 skip=$((0x2DB000)) of=squashfs
The squashfs image is easy to examine, just mount it using the loopback:
sudo mount -o loop squashfs /mnt
The JFFS2 filesystem is a little more complicated to unpack, because it expects to be on an MTD device. Fortunately, there is a Python program that will unpack them for you - Jefferson:
pip3 install jefferson
jefferson jffs
writing S_ISDIR etc
writing S_ISDIR work
writing S_ISDIR etc/config
writing S_ISREG etc/config/dnsmasq1.conf
writing S_ISREG etc/config/dnsmasq2.conf
writing S_ISREG etc/config/dnsmasq_p2p.conf
writing S_ISREG etc/config/nvram_ap6256.txt
writing S_ISREG etc/config/start_p2p
writing S_ISREG etc/config/start_wifi
writing S_ISREG etc/config/wpa_supplicant.conf
And there you go. I still need to do a bit more digging, but it appears that the root account has no password (the shadow entry is empty), and there should be a serial console active if you crack it open and find the right pins to connect to.
/usr/bin/setup_gadgets has code for setting up the USB keyboard, mouse and touch interfaces, but I have not yet found the code that actually calls that binary. I have found details of the WiFi card (SDIO BCM4345C5) and the HDMI-CSI2 bridge (Toshiba tc35874x). I have not found out how the firmware can be updated over USB; perhaps there are more apps that set up the UDC. I guess it could be done over Bluetooth (i.e. reconfigure the USB device if it sees a poke). I suppose digging further into the Windows executable would provide that detail.
If anyone who actually has an Aurga Viewer would like to crack it open and post high-res pictures of the board, that would be amazing.
EDIT: for those that wonder why this might be useful, I have seen folks looking for a way to include the video stream in OBS. This could allow you to add an RTSP stream server to the firmware that OBS could consume. Have the AURGA present a USB Mass Storage device to the target, backed by a Network Block Device (nbd), which could be used to boot a new/unresponsive device. Replace the vendor's remote desktop interface with VNC. Or possibly make the hardware do other interesting things, limited only by your imagination (and the capabilities of the hardware, of course!)
submitted by RoganDawes to Aurga [link] [comments]

2024.05.13 17:09 Sold4kidneys Is it better to install Kali Linux on a VM or as a Dual boot?

So for starters, I will be using it mostly for pen testing and also exploring DoS and DDoS attacks, injections, sniffing, etc., the basic stuff (only for educational purposes).
But before I do any of it, I want to know what's safer. I will be using my main PC to run the OS, but would it be safer to have it as a VM or as a dual boot?
I will be reading the pinned thread after this, but I would also like to know if there's any prerequisite software that I may need to install that could come in handy for added security or to make said activities more 'efficient' or convenient.
submitted by Sold4kidneys to hacking [link] [comments]

2024.05.13 16:28 Ambitious_Internet_5 Best distro for me as beginner.

I'm new to Linux, so first I tried Ubuntu in a VM and I don't like its desktop environment at all. After that I tried Arch Linux (I know it's a dumb idea to try Arch as a beginner); I like its customization and that it is lightweight, but installing it from scratch gave me a shock, because I'm just a Windows user coming to try Linux. So what's the best distro for me?
submitted by Ambitious_Internet_5 to linux4noobs [link] [comments]
