
Nvidia GPU Overclocking (OC) in Linux

Note:

Pushing a GPU to its peak means getting the maximum out of it while also stressing it more; in my experience, overclocking will shorten the life of the GPU. Please tune the GPU with a benchmarking tool and double-check the GPU temperature. I keep the power limit as low as possible, especially for the 1060, which still gives a good hashrate.

Enable GPU tuning for nvidia-settings with the nvidia-xconfig utility

     sudo nvidia-xconfig -a --allow-empty-initial-configuration --cool-bits=28 --enable-all-gpus
     

For headless systems, generate the Xorg config with a dummy monitor:

     sudo nvidia-xconfig -a --force-generate --allow-empty-initial-configuration --cool-bits=28 --no-sli --connected-monitor="DFP-0"
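In either case, it is worth confirming that the generated config actually contains the Coolbits option before restarting X. A minimal sketch: the Device section below is stand-in data for what nvidia-xconfig writes; on a real system you would simply grep /etc/X11/xorg.conf.

```shell
# Stand-in for one Device section nvidia-xconfig generates
sample='Section "Device"
    Identifier "Device0"
    Driver     "nvidia"
    Option     "Coolbits" "28"
EndSection'

# On a real rig: grep -i coolbits /etc/X11/xorg.conf
printf '%s\n' "$sample" | grep -i coolbits
```

If the grep prints nothing, nvidia-settings will refuse to change fan speeds or clock offsets.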

Tuning Power Consumption for the 1070/1060/1050 Ti



GTX 1070 (expected ETH hash rate ~30 MH/s):

     sudo nvidia-smi -pm 1
     sudo nvidia-smi -pl 110

GTX 1060 (expected ETH hash rate ~22 MH/s):

     sudo nvidia-smi -pm 1
     sudo nvidia-smi -pl 90

GTX 1050 Ti (expected ETH hash rate ~13.1 MH/s):

     sudo nvidia-smi -pm 1
     sudo nvidia-smi -pl 60
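The three power limits above can be folded into one helper that picks the wattage from the GPU model name. This is a sketch: pl_for_model is a hypothetical function (not part of nvidia-smi), and the wattages simply mirror the values listed above.

```shell
# Map a GPU model string to the power limit (watts) used above.
pl_for_model() {
    case "$1" in
        *"GTX 1070"*)    echo 110 ;;
        *"GTX 1060"*)    echo 90  ;;
        *"GTX 1050 Ti"*) echo 60  ;;
        *)               echo 100 ;;  # conservative fallback
    esac
}

# On a real rig you would drive it from `nvidia-smi -L`, e.g.:
#   nvidia-smi -L | while read -r line; do
#       idx=${line#GPU }; idx=${idx%%:*}
#       sudo nvidia-smi -i "$idx" -pl "$(pl_for_model "$line")"
#   done
pl_for_model "GPU 0: GeForce GTX 1060 6GB (UUID: ...)"   # prints "90"
```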


Setting the fan speed and memory clock offset. Repeat these commands for every GPU in the system, incrementing the gpu: and fan: indices.



GTX 1070 (~30 MH/s Solo ETH):

     sudo nvidia-settings -c :0 -a [gpu:0]/GPUMemoryTransferRateOffset[3]=1100
     sudo nvidia-settings -c :0 -a [gpu:0]/GPUGraphicsClockOffset[3]=-200
     sudo nvidia-settings -c :0 -a [gpu:0]/GPUFanControlState=1
     sudo nvidia-settings -c :0 -a [fan:0]/GPUTargetFanSpeed=80

GTX 1060 (~21.5 MH/s Solo ETH):

     sudo nvidia-settings -c :0 -a [gpu:0]/GPUMemoryTransferRateOffset[3]=1600
     sudo nvidia-settings -c :0 -a [gpu:0]/GPUGraphicsClockOffset[3]=-160
     sudo nvidia-settings -c :0 -a [gpu:0]/GPUFanControlState=1
     sudo nvidia-settings -c :0 -a [fan:0]/GPUTargetFanSpeed=80

GTX 1050 Ti (~13.1 MH/s Solo ETH):

     sudo nvidia-settings -c :0 -a [gpu:0]/GPUMemoryTransferRateOffset[2]=600
     sudo nvidia-settings -c :0 -a [gpu:0]/GPUGraphicsClockOffset[2]=-100
     sudo nvidia-settings -c :0 -a [gpu:0]/GPUFanControlState=1
     sudo nvidia-settings -c :0 -a [fan:0]/GPUTargetFanSpeed=80
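Typing these four commands per card gets tedious on multi-GPU rigs; the indices can be generated in a loop instead. A sketch using the 1070 offsets from above (NUMGPU=2 is a stand-in for "$(nvidia-smi -L | wc -l)"; drop the echo to actually apply the settings):

```shell
NUMGPU=2   # stand-in; on a real rig use: NUMGPU="$(nvidia-smi -L | wc -l)"
n=0
while [ "$n" -lt "$NUMGPU" ]; do
    # echo the command instead of running it, so the sketch is safe to dry-run
    echo sudo nvidia-settings -c :0 \
        -a "[gpu:$n]/GPUMemoryTransferRateOffset[3]=1100" \
        -a "[gpu:$n]/GPUGraphicsClockOffset[3]=-200" \
        -a "[gpu:$n]/GPUFanControlState=1" \
        -a "[fan:$n]/GPUTargetFanSpeed=80"
    n=$((n + 1))
done
```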


NVIDIA Auto OC Script

#!/bin/bash
##
# Define environment variables
#
MemoryOffset="1600"
ClockOffset="-200"
FanSpeed="80"
export DISPLAY=:0
xset -dpms
xset s off
xhost +   # allow clients to connect to the X server (permissive; tighten for production)

## Create xorg.conf with cool bits. I use 31 (all bits); check the driver README for what each bit enables
nvidia-xconfig -a --force-generate --allow-empty-initial-configuration --cool-bits=31 --no-sli --connected-monitor="DFP-0"
echo "In case of any error, please reboot and run this script again"

# Paths to the utilities we will need
SMI='/usr/bin/nvidia-smi'
SET='/usr/bin/nvidia-settings'

# Determine major driver version
VER=$(awk '/NVIDIA/ {print $8}' /proc/driver/nvidia/version | cut -d . -f 1)

# Drivers from 285.x.y onward allow setting persistence mode
if [ "${VER}" -lt 285 ]
then
    echo "Error: Current driver version is ${VER}. Driver version must be 285 or newer."; exit 1;
fi

$SMI -pm 1 # enable persistence mode
$SMI -i 0,1,2,3,4 -pl 90 # 90 W power limit; adjust the GPU index list and wattage for your rig

echo "Applying Settings"

# how many GPUs are in the system?
NUMGPU="$(nvidia-smi -L | wc -l)"

# loop through each GPU and individually set fan speed and clock offsets
n=0
while [ "$n" -lt "$NUMGPU" ]
do
    ${SET} -c :0 -a [gpu:${n}]/GPUFanControlState=1 -a [fan:${n}]/GPUTargetFanSpeed=$FanSpeed
    ${SET} -c :0 -a [gpu:${n}]/GpuPowerMizerMode=1
    ${SET} -c :0 -a [gpu:${n}]/GPUMemoryTransferRateOffset[3]=$MemoryOffset
    ${SET} -c :0 -a [gpu:${n}]/GPUGraphicsClockOffset[3]=$ClockOffset
    n=$((n + 1))
done

echo "Complete"; exit 0
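One small hardening worth adding to the script above: clamp the fan-speed variable before handing it to nvidia-settings, since the driver rejects values outside 0-100. clamp_fan is a hypothetical helper of this sketch, and the 30% floor is a safety choice here, not a driver requirement.

```shell
# Clamp a requested fan speed into a sane percentage range.
clamp_fan() {
    v=$1
    [ "$v" -lt 30 ]  && v=30    # keep some airflow even if misconfigured low
    [ "$v" -gt 100 ] && v=100   # the driver rejects values above 100
    echo "$v"
}

clamp_fan 80    # prints "80"
clamp_fan 150   # prints "100"
```

Inside the loop you would then pass "$(clamp_fan "$FanSpeed")" in place of $FanSpeed.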

Comments

  2. hey!
    Thanks for posting this; it's just what I have been looking for, as I recently received an Nvidia GTX 1060 and I'm trying to squeeze a little more performance out of it in my little mini test rig!

    However, I cannot see or confirm any change in memory or graphics clock speeds...

    The output from:
    # nvidia-smi -q -i 1 -d CLOCK

    shows that nvidia driver version 390.25 is in use:

    Timestamp : Mon Mar 12 16:23:50 2018
    Driver Version : 390.25

    and that the current clock speed settings are:

    Attached GPUs : 2
    GPU 00000000:0B:00.0
    Clocks
    Graphics : 1860 MHz
    SM : 1860 MHz
    Memory : 3802 MHz
    Video : 1670 MHz


    with Max Clocks:

    Max Clocks
    Graphics : 1961 MHz
    SM : 1961 MHz
    Memory : 4004 MHz
    Video : 1708 MHz


    so according to this I can "safely" boost the memory clock by 202MHz to 4004MHz, so the command to use is:

    # nvidia-settings -c :0 -a [gpu:1]/GPUMemoryTransferRateOffset[3]=202

    but I see no change to the current clock speed!
    The same applies if I try to change the graphics clock speed, and I also get an error when trying to set the GPU fans to a certain %, e.g.:

    # nvidia-settings -c :0 -a [gpu:1]/GPUFanControlState=1
    # echo $?
    0

    # nvidia-settings -c :0 -a [fan:0]/GPUTargetFanSpeed=80

    ERROR: Error assigning value 80 to attribute 'GPUTargetFanSpeed' (ubu16:0[fan:0]) as specified in assignment
    '[fan:0]/GPUTargetFanSpeed=80' (Unknown Error).


    Replies
    1. This is because you are running a headless configuration.

      From an SSH console, try the script below:

      #!/bin/bash

      MemoryOffset="1600"
      ClockOffset="-200"
      FanSpeed="80"

      export DISPLAY=:0
      xset -dpms
      xset s off
      xhost +

      ## Create xorg.conf with cool bits. I use 31; check the driver README for what each bit enables
      nvidia-xconfig -a --force-generate --allow-empty-initial-configuration --cool-bits=31 --no-sli --connected-monitor="DFP-0"


      # Paths to the utilities we will need
      SMI='/usr/bin/nvidia-smi'
      SET='/usr/bin/nvidia-settings'

      # Determine major driver version
      VER=$(awk '/NVIDIA/ {print $8}' /proc/driver/nvidia/version | cut -d . -f 1)

      # Drivers from 285.x.y onward allow setting persistence mode
      if [ "${VER}" -lt 285 ]
      then
          echo "Error: Current driver version is ${VER}. Driver version must be 285 or newer."; exit 1;
      fi

      $SMI -pm 1 # enable persistence mode
      $SMI -i 0,1,2,3,4 -pl 90 # adjust the GPU index list and wattage for your rig

      echo "Applying Settings"

      # how many GPUs are in the system?
      NUMGPU="$(nvidia-smi -L | wc -l)"

      # loop through each GPU and individually set fan speed and clock offsets
      n=0
      while [ "$n" -lt "$NUMGPU" ]
      do
          ${SET} -c :0 -a [gpu:${n}]/GPUFanControlState=1 -a [fan:${n}]/GPUTargetFanSpeed=$FanSpeed
          ${SET} -c :0 -a [gpu:${n}]/GpuPowerMizerMode=1
          ${SET} -c :0 -a [gpu:${n}]/GPUMemoryTransferRateOffset[3]=$MemoryOffset
          ${SET} -c :0 -a [gpu:${n}]/GPUGraphicsClockOffset[3]=$ClockOffset
          n=$((n + 1))
      done

      echo "Complete"; exit 0

    2. Thanks for the info and update.
      I managed to figure out myself that the commands I was trying weren't actually connecting to the driver via Xorg, so my workaround was simply to export the DISPLAY variable and then allow connections from anywhere, e.g.:

      export DISPLAY=:0
      xhost +

      Also, I had the issue of the xorg.conf file being rewritten each time Xorg or the machine was restarted; to counter this I made the file read-only and immutable after making the changes I wanted, which included:

      Option "RegistryDwords" "PowerMizerEnable=0x1; PerfLevelSrc=0x2222; PowerMizerLevel=0x1; PowerMizerDefault=0x1; PowerMizerDefaultAC=0x1"
      Option "AllowEmptyInitialConfiguration" "True"
      Option "Coolbits" "28"


      And I am currently playing with mining MinexCoin (Equihash 96/5 algorithm) using a compiled binary called lolMiner; with the GTX 1060 I can manage a consistent solution rate of about 12.5 kSol/s with a bit of overclocking, but I do have to run it at maximum power (120 W) and 75% fan speed to keep the temperature down.

      If I lower the graphics clock like you suggest, the rate drops, but I haven't tried a big +1600 on the memory clock, only +400.

      Is it also possible to adjust the core voltage or is there no need?

      For more info on lolMiner and MinexCoin mining, look here:
      https://bitcointalk.org/index.php?topic=2933939.0

    3. 1. After your first update, I rewrote the blog post and updated my script to support headless systems.
      2. My tuning is meant for Ethereum + ethminer + Linux + Gigabyte 1060 6GB.
      3. I have not touched the core voltage.
      4. Thanks a lot for the minable coin idea, MinexCoin. I will do some research on it. Please share if you have more details about that coin.

    4. I can recommend MinexCoin as one worthwhile to mine, as any coins you create can be "parked" in MinexBank (https://minexbank.com/) to earn additional coins!
      Depending on the size of your rig, the parking rate is often better than what small rigs can output, so there can be a better ROI in just buying MNX from an exchange and parking it.

  3. Nice post. I’m looking forward to seeing more useful posts on this blog. In particular, tips on getting AMD cards working well with Linux.

    +1600 mem. Wow. What memory is this suitable for? I suspect this would cause crashes on some cards.

    Replies
    1. 1600 is good for the Nvidia 1060. To be frank, I do not have an AMD card as of now; I will write about it once I get one.

      I am writing a blog post on how to use "ethminer" rather than Claymore. I am getting 20+ MH/s with it. I will explain why not to use Claymore :)

