
Author | Will Zhang
Source | OneFlow
Editor | 極市平台

Overview

This article works through a classic problem, Prefix Sum, also known as Scan or Prefix Scan. Scan is a subproblem of important algorithms such as sorting, so it is essentially a must-learn problem for anyone going deeper into CUDA.

1 Problem Definition

First, let us define the problem informally. Given an input array input[n], compute a new array output[n] such that every element output[i] satisfies:

output[i] = input[0] + input[1] + ... + input[i]

For example, input = [1, 2, 3, 4] yields output = [1, 3, 6, 10].

On the CPU this can be implemented simply as:

```cpp
void PrefixSum(const int32_t* input, size_t n, int32_t* output) {
  int32_t sum = 0;
  for (size_t i = 0; i < n; ++i) {
    sum += input[i];
    output[i] = sum;
  }
}
```

Now the question: how do we parallelize this, with thousands of threads cooperating? The problem contains an obvious dependency: each element's result depends on all the values before it. Seeing it for the first time, one might well conclude it cannot be parallelized at all.

Going further: how do we parallelize it in CUDA? How do we parallelize at the warp level? How when the data fits in Shared Memory, and how when it does not?

2 ScanThenFan

To begin, assume all of the data fits in Global Memory; with even more data the core logic stays similar.

The first method we introduce is called ScanThenFan, and it matches intuition:

- Split the data in Global Memory into multiple parts; each part is scanned internally by its own thread block, which also writes the part's total into a PartSum array in Global Memory

- Scan the PartSum array; we call the scanned result BaseSum

- Add the BaseSum of the preceding part to every element of each part

(Figure: the three phases of ScanThenFan.)
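The three steps above can be sketched serially on the host. This is only a simulation of the decomposition, not the CUDA code; each loop over `part_i` stands in for the work one thread block does in parallel:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Serial simulation of the three ScanThenFan phases.
std::vector<int32_t> ScanThenFanHost(const std::vector<int32_t>& input,
                                     size_t part_size) {
  size_t n = input.size();
  size_t part_num = (n + part_size - 1) / part_size;
  std::vector<int32_t> output(n);
  std::vector<int32_t> part_sum(part_num);
  // phase 1: scan each part independently and record its total
  for (size_t part_i = 0; part_i < part_num; ++part_i) {
    size_t begin = part_i * part_size;
    size_t end = std::min(begin + part_size, n);
    int32_t acc = 0;
    for (size_t i = begin; i < end; ++i) {
      acc += input[i];
      output[i] = acc;
    }
    part_sum[part_i] = acc;
  }
  // phase 2: scan the part sums (this becomes BaseSum)
  for (size_t i = 1; i < part_num; ++i) {
    part_sum[i] += part_sum[i - 1];
  }
  // phase 3: fan the previous part's base sum into each element
  for (size_t part_i = 1; part_i < part_num; ++part_i) {
    size_t begin = part_i * part_size;
    size_t end = std::min(begin + part_size, n);
    for (size_t i = begin; i < end; ++i) {
      output[i] += part_sum[part_i - 1];
    }
  }
  return output;
}
```

Phases 1 and 3 are embarrassingly parallel across parts; only phase 2 carries a serial dependency, and it operates on far fewer elements.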

3 Baseline

Let us not worry yet about how to scan within a block: for now a single thread per block does the work naively, which gives the following code:

```cpp
__global__ void ScanAndWritePartSumKernel(const int32_t* input, int32_t* part,
                                          int32_t* output, size_t n,
                                          size_t part_num) {
  for (size_t part_i = blockIdx.x; part_i < part_num; part_i += gridDim.x) {
    // this part processes input[part_begin:part_end]
    // stores sum to part[part_i], output[part_begin:part_end]
    size_t part_begin = part_i * blockDim.x;
    size_t part_end = min((part_i + 1) * blockDim.x, n);
    if (threadIdx.x == 0) {
      // naive implementation
      int32_t acc = 0;
      for (size_t i = part_begin; i < part_end; ++i) {
        acc += input[i];
        output[i] = acc;
      }
      part[part_i] = acc;
    }
  }
}

__global__ void ScanPartSumKernel(int32_t* part, size_t part_num) {
  int32_t acc = 0;
  for (size_t i = 0; i < part_num; ++i) {
    acc += part[i];
    part[i] = acc;
  }
}

__global__ void AddBaseSumKernel(int32_t* part, int32_t* output, size_t n,
                                 size_t part_num) {
  for (size_t part_i = blockIdx.x; part_i < part_num; part_i += gridDim.x) {
    if (part_i == 0) {
      continue;
    }
    int32_t index = part_i * blockDim.x + threadIdx.x;
    if (index < n) {
      output[index] += part[part_i - 1];
    }
  }
}

// for i in range(n):
//   output[i] = input[0] + input[1] + ... + input[i]
void ScanThenFan(const int32_t* input, int32_t* buffer, int32_t* output,
                 size_t n) {
  size_t part_size = 1024;  // tuned
  size_t part_num = (n + part_size - 1) / part_size;
  size_t block_num = std::min<size_t>(part_num, 128);
  // use buffer[0:part_num] to save the metric of part
  int32_t* part = buffer;
  // after following step, part[i] = part_sum[i]
  ScanAndWritePartSumKernel<<<block_num, part_size>>>(input, part, output, n,
                                                      part_num);
  // after following step, part[i] = part_sum[0] + part_sum[1] + ... part_sum[i]
  ScanPartSumKernel<<<1, 1>>>(part, part_num);
  // make final result
  AddBaseSumKernel<<<block_num, part_size>>>(part, output, n, part_num);
}
```

The code is still full of naive placeholder logic, but the overall framework is now in place. Its runtime of 72390us serves as our baseline.

4 Shared Memory

Next, consider ScanAndWritePartSumKernel. As a simple first optimization, we load each part into Shared Memory before applying the same naive logic:

```cpp
__device__ void ScanBlock(int32_t* shm) {
  if (threadIdx.x == 0) {
    // naive implementation
    int32_t acc = 0;
    for (size_t i = 0; i < blockDim.x; ++i) {
      acc += shm[i];
      shm[i] = acc;
    }
  }
  __syncthreads();
}

__global__ void ScanAndWritePartSumKernel(const int32_t* input, int32_t* part,
                                          int32_t* output, size_t n,
                                          size_t part_num) {
  extern __shared__ int32_t shm[];
  for (size_t part_i = blockIdx.x; part_i < part_num; part_i += gridDim.x) {
    // store this part's input to shm
    size_t index = part_i * blockDim.x + threadIdx.x;
    shm[threadIdx.x] = index < n ? input[index] : 0;
    __syncthreads();
    // scan on shared memory
    ScanBlock(shm);
    __syncthreads();
    // write result
    if (index < n) {
      output[index] = shm[threadIdx.x];
    }
    if (threadIdx.x == blockDim.x - 1) {
      part[part_i] = shm[threadIdx.x];
    }
  }
}
```

This simple optimization cuts the time from 72390us to 33726us, thanks to the batched reads from Global Memory.

5 ScanBlock

Now we optimize the in-block scan properly. A block-wide scan can be decomposed with a similar idea:

- Organize the block into warps; each warp first scans internally, then writes its total into a WarpSum array in Shared Memory

- Use a single warp to scan WarpSum

- Each warp adds the scanned WarpSum of the preceding warp to its final results

The code is as follows:

```cpp
__device__ void ScanWarp(int32_t* shm_data, int32_t lane) {
  if (lane == 0) {
    // naive implementation
    int32_t acc = 0;
    for (int32_t i = 0; i < 32; ++i) {
      acc += shm_data[i];
      shm_data[i] = acc;
    }
  }
}

__device__ void ScanBlock(int32_t* shm_data) {
  int32_t warp_id = threadIdx.x >> 5;
  int32_t lane = threadIdx.x & 31;  // 31 = 00011111
  __shared__ int32_t warp_sum[32];  // blockDim.x / WarpSize = 32
  // scan each warp
  ScanWarp(shm_data, lane);
  __syncthreads();
  // write sum of each warp to warp_sum
  if (lane == 31) {
    warp_sum[warp_id] = *shm_data;
  }
  __syncthreads();
  // use a single warp to scan warp_sum
  if (warp_id == 0) {
    ScanWarp(warp_sum + lane, lane);
  }
  __syncthreads();
  // add base
  if (warp_id > 0) {
    *shm_data += warp_sum[warp_id - 1];
  }
  __syncthreads();
}
```

This step brings the time from 33726us down to 9948us.

6 ScanWarp

Next we optimize ScanWarp itself. To keep the explanation simple, assume we scan 16 numbers; the algorithm is illustrated by the figure described below.

Picture 16 points in a row, one per number, with time advancing downward: every node with in-degree 2 adds its two inputs and broadcasts the result to its output nodes. For 32 numbers, the code is:

```cpp
__device__ void ScanWarp(int32_t* shm_data) {
  int32_t lane = threadIdx.x & 31;
  volatile int32_t* vshm_data = shm_data;
  if (lane >= 1) {
    vshm_data[0] += vshm_data[-1];
  }
  __syncwarp();
  if (lane >= 2) {
    vshm_data[0] += vshm_data[-2];
  }
  __syncwarp();
  if (lane >= 4) {
    vshm_data[0] += vshm_data[-4];
  }
  __syncwarp();
  if (lane >= 8) {
    vshm_data[0] += vshm_data[-8];
  }
  __syncwarp();
  if (lane >= 16) {
    vshm_data[0] += vshm_data[-16];
  }
  __syncwarp();
}
```

With this algorithm, no step has a bank conflict, and the time drops from 9948us to 7595us.
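The doubling scheme above can be checked serially on the host. This is only a sketch: the inner loop over lanes emulates what the 32 threads do in lockstep between `__syncwarp()` barriers, using a snapshot to model reads happening before writes:

```cpp
#include <cstdint>
#include <vector>

// Emulate the warp scan: at each step, every lane >= offset adds the
// value `offset` positions to its left. The `prev` snapshot models the
// lockstep execution between __syncwarp() calls.
std::vector<int32_t> WarpScanHost(std::vector<int32_t> data) {
  for (int offset = 1; offset < 32; offset *= 2) {
    std::vector<int32_t> prev = data;  // values as of the last barrier
    for (int lane = 0; lane < 32; ++lane) {
      if (lane >= offset) {
        data[lane] = prev[lane] + prev[lane - offset];
      }
    }
  }
  return data;
}
```

After the five steps (offsets 1, 2, 4, 8, 16), lane i holds the sum of elements 0..i, so the whole warp is scanned in log2(32) = 5 rounds instead of 32 serial additions.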

7 ZeroPadding

Next we want to go a step further and eliminate the `if`s in ScanWarp, i.e., stop branching on the lane so that all threads in the warp execute exactly the same instructions. That means threads that previously failed the condition would now read out of bounds, so we add padding to keep those accesses in bounds.
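The padding idea can be illustrated on the host (a sketch with hypothetical names, not the kernel code): with 16 zeros in front of each 32-element segment, the unconditional `a[i] += a[i - offset]` never reads out of range, and the zeros leave the sums unchanged:

```cpp
#include <cstdint>
#include <vector>

// Branch-free warp scan over buf[16..47]: the 16 leading zeros absorb
// the out-of-range reads that `if (lane >= offset)` used to guard.
std::vector<int32_t> PaddedWarpScanHost(const std::vector<int32_t>& input) {
  std::vector<int32_t> buf(16 + 32, 0);  // 16-element zero padding in front
  for (int lane = 0; lane < 32; ++lane) {
    buf[16 + lane] = input[lane];
  }
  for (int offset = 1; offset < 32; offset *= 2) {
    std::vector<int32_t> prev = buf;  // lockstep snapshot, as before
    for (int lane = 0; lane < 32; ++lane) {
      // every lane executes the same statement; no lane check needed
      buf[16 + lane] = prev[16 + lane] + prev[16 + lane - offset];
    }
  }
  return {buf.begin() + 16, buf.end()};
}
```

The largest offset is 16, which is why 16 padding elements per warp suffice.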

To implement the padding, look back at ScanBlock: its warp_sum array was declared with a fixed size rather than supplied at kernel launch. To make it easier to modify, we switch it to launch-time (extern) shared memory:

```cpp
__device__ void ScanBlock(int32_t* shm_data) {
  int32_t warp_id = threadIdx.x >> 5;
  int32_t lane = threadIdx.x & 31;  // 31 = 00011111
  extern __shared__ int32_t warp_sum[];  // warp_sum[32]
  // scan each warp
  ScanWarp(shm_data);
  __syncthreads();
  // write sum of each warp to warp_sum
  if (lane == 31) {
    warp_sum[warp_id] = *shm_data;
  }
  __syncthreads();
  // use a single warp to scan warp_sum
  if (warp_id == 0) {
    ScanWarp(warp_sum + lane);
  }
  __syncthreads();
  // add base
  if (warp_id > 0) {
    *shm_data += warp_sum[warp_id - 1];
  }
  __syncthreads();
}

__global__ void ScanAndWritePartSumKernel(const int32_t* input, int32_t* part,
                                          int32_t* output, size_t n,
                                          size_t part_num) {
  // the first 32 is used to save warp sum
  extern __shared__ int32_t shm[];
  for (size_t part_i = blockIdx.x; part_i < part_num; part_i += gridDim.x) {
    // store this part's input to shm
    size_t index = part_i * blockDim.x + threadIdx.x;
    shm[32 + threadIdx.x] = index < n ? input[index] : 0;
    __syncthreads();
    // scan on shared memory
    ScanBlock(shm + 32 + threadIdx.x);
    __syncthreads();
    // write result
    if (index < n) {
      output[index] = shm[32 + threadIdx.x];
    }
    if (threadIdx.x == blockDim.x - 1) {
      part[part_i] = shm[32 + threadIdx.x];
    }
  }
}

__global__ void ScanPartSumKernel(int32_t* part, size_t part_num) {
  int32_t acc = 0;
  for (size_t i = 0; i < part_num; ++i) {
    acc += part[i];
    part[i] = acc;
  }
}

__global__ void AddBaseSumKernel(int32_t* part, int32_t* output, size_t n,
                                 size_t part_num) {
  for (size_t part_i = blockIdx.x; part_i < part_num; part_i += gridDim.x) {
    if (part_i == 0) {
      continue;
    }
    int32_t index = part_i * blockDim.x + threadIdx.x;
    if (index < n) {
      output[index] += part[part_i - 1];
    }
  }
}

// for i in range(n):
//   output[i] = input[0] + input[1] + ... + input[i]
void ScanThenFan(const int32_t* input, int32_t* buffer, int32_t* output,
                 size_t n) {
  size_t part_size = 1024;  // tuned
  size_t part_num = (n + part_size - 1) / part_size;
  size_t block_num = std::min<size_t>(part_num, 128);
  // use buffer[0:part_num] to save the metric of part
  int32_t* part = buffer;
  // after following step, part[i] = part_sum[i]
  size_t shm_size = (32 + part_size) * sizeof(int32_t);
  ScanAndWritePartSumKernel<<<block_num, part_size, shm_size>>>(
      input, part, output, n, part_num);
  // after following step, part[i] = part_sum[0] + part_sum[1] + ... part_sum[i]
  ScanPartSumKernel<<<1, 1>>>(part, part_num);
  // make final result
  AddBaseSumKernel<<<block_num, part_size>>>(part, output, n, part_num);
}
```

Note that the launch of ScanAndWritePartSumKernel now computes the shared-memory size explicitly. To add the padding we must enlarge it further: each warp needs 16 elements of padding so that no thread in ScanWarp reads out of bounds. So we change ScanThenFan to:

```cpp
// for i in range(n):
//   output[i] = input[0] + input[1] + ... + input[i]
void ScanThenFan(const int32_t* input, int32_t* buffer, int32_t* output,
                 size_t n) {
  size_t part_size = 1024;  // tuned
  size_t part_num = (n + part_size - 1) / part_size;
  size_t block_num = std::min<size_t>(part_num, 128);
  // use buffer[0:part_num] to save the metric of part
  int32_t* part = buffer;
  // after following step, part[i] = part_sum[i]
  size_t warp_num = part_size / 32;
  size_t shm_size = (16 + 32 + warp_num * (16 + 32)) * sizeof(int32_t);
  ScanAndWritePartSumKernel<<<block_num, part_size, shm_size>>>(
      input, part, output, n, part_num);
  // after following step, part[i] = part_sum[0] + part_sum[1] + ... part_sum[i]
  ScanPartSumKernel<<<1, 1>>>(part, part_num);
  // make final result
  AddBaseSumKernel<<<block_num, part_size>>>(part, output, n, part_num);
}
```

Note the shm_size computation: warp_sum also gets a 16-element zero padding. The kernels are rewritten accordingly:

```cpp
__device__ void ScanWarp(int32_t* shm_data) {
  volatile int32_t* vshm_data = shm_data;
  vshm_data[0] += vshm_data[-1];
  vshm_data[0] += vshm_data[-2];
  vshm_data[0] += vshm_data[-4];
  vshm_data[0] += vshm_data[-8];
  vshm_data[0] += vshm_data[-16];
}

__device__ void ScanBlock(int32_t* shm_data) {
  int32_t warp_id = threadIdx.x >> 5;
  int32_t lane = threadIdx.x & 31;
  extern __shared__ int32_t warp_sum[];  // 16 zero padding
  // scan each warp
  ScanWarp(shm_data);
  __syncthreads();
  // write sum of each warp to warp_sum
  if (lane == 31) {
    warp_sum[16 + warp_id] = *shm_data;
  }
  __syncthreads();
  // use a single warp to scan warp_sum
  if (warp_id == 0) {
    ScanWarp(warp_sum + 16 + lane);
  }
  __syncthreads();
  // add base
  if (warp_id > 0) {
    *shm_data += warp_sum[16 + warp_id - 1];
  }
  __syncthreads();
}

__global__ void ScanAndWritePartSumKernel(const int32_t* input, int32_t* part,
                                          int32_t* output, size_t n,
                                          size_t part_num) {
  // the first 16 + 32 is used to save warp sum
  extern __shared__ int32_t shm[];
  int32_t warp_id = threadIdx.x >> 5;
  int32_t lane = threadIdx.x & 31;
  // initialize the zero padding
  if (threadIdx.x < 16) {
    shm[threadIdx.x] = 0;
  }
  if (lane < 16) {
    shm[(16 + 32) + warp_id * (16 + 32) + lane] = 0;
  }
  __syncthreads();
  // process each part
  for (size_t part_i = blockIdx.x; part_i < part_num; part_i += gridDim.x) {
    // store this part's input to shm
    size_t index = part_i * blockDim.x + threadIdx.x;
    int32_t* myshm = shm + (16 + 32) + warp_id * (16 + 32) + 16 + lane;
    *myshm = index < n ? input[index] : 0;
    __syncthreads();
    // scan on shared memory
    ScanBlock(myshm);
    __syncthreads();
    // write result
    if (index < n) {
      output[index] = *myshm;
    }
    if (threadIdx.x == blockDim.x - 1) {
      part[part_i] = *myshm;
    }
  }
}
```

The changes are mostly index arithmetic. This optimization takes the time from 7595us to 7516us. The gain looks small mainly because it is masked by the current bottleneck. ScanWarp can be optimized further with warp shuffle, but to make that effect visible we defer it and first deal with the bottleneck.

8 Recursion

A current bottleneck is that, for simplicity, the scan over PartSum is still performed by a single thread. We can instead handle it recursively:

```cpp
// for i in range(n):
//   output[i] = input[0] + input[1] + ... + input[i]
void ScanThenFan(const int32_t* input, int32_t* buffer, int32_t* output,
                 size_t n) {
  size_t part_size = 1024;  // tuned
  size_t part_num = (n + part_size - 1) / part_size;
  size_t block_num = std::min<size_t>(part_num, 128);
  // use buffer[0:part_num] to save the metric of part
  int32_t* part = buffer;
  // after following step, part[i] = part_sum[i]
  size_t warp_num = part_size / 32;
  size_t shm_size = (16 + 32 + warp_num * (16 + 32)) * sizeof(int32_t);
  ScanAndWritePartSumKernel<<<block_num, part_size, shm_size>>>(
      input, part, output, n, part_num);
  if (part_num >= 2) {
    // after following step
    // part[i] = part_sum[0] + part_sum[1] + ... + part_sum[i]
    ScanThenFan(part, buffer + part_num, part, part_num);
    // make final result
    AddBaseSumKernel<<<block_num, part_size>>>(part, output, n, part_num);
  }
}
```

With that naive single-thread pass removed, the time falls from 7516us to 3972us.
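A detail worth checking is why the single `buffer` with the `buffer + part_num` offset is enough scratch space for the whole recursion. Each level consumes `part_num` entries and then recurses on an input of that size, so the total is the sum of part counts across levels; a small host-side sketch (hypothetical helper name) makes the accounting concrete:

```cpp
#include <cstddef>

// Total scratch entries consumed by the recursive ScanThenFan: each
// level needs ceil(n / part_size) entries and then recurses on that
// many values, until a single part remains.
size_t ScanBufferEntries(size_t n, size_t part_size) {
  size_t total = 0;
  while (n > 1) {
    size_t part_num = (n + part_size - 1) / part_size;
    total += part_num;
    if (part_num == 1) break;  // deepest level: one part, no recursion
    n = part_num;
  }
  return total;
}
```

With part_size = 1024, even n = 2^20 needs only 1024 + 1 = 1025 scratch entries, so the recursion is at most a few levels deep and the buffer stays tiny relative to the input.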

9 WarpShuffle

Next we implement the warp scan using warp shuffle:

```cpp
__device__ int32_t ScanWarp(int32_t val) {
  int32_t lane = threadIdx.x & 31;
  int32_t tmp = __shfl_up_sync(0xffffffff, val, 1);
  if (lane >= 1) {
    val += tmp;
  }
  tmp = __shfl_up_sync(0xffffffff, val, 2);
  if (lane >= 2) {
    val += tmp;
  }
  tmp = __shfl_up_sync(0xffffffff, val, 4);
  if (lane >= 4) {
    val += tmp;
  }
  tmp = __shfl_up_sync(0xffffffff, val, 8);
  if (lane >= 8) {
    val += tmp;
  }
  tmp = __shfl_up_sync(0xffffffff, val, 16);
  if (lane >= 16) {
    val += tmp;
  }
  return val;
}
```

The time falls from 3972us to 3747us.

10 PTX

We can dig further by using cuobjdump to inspect the compiled PTX. I have added a few comments:

```ptx
// declare registers
.reg .pred %p<11>;
.reg .b32 %r<39>;
// load the parameter into r35
ld.param.u32 %r35, [_Z8ScanWarpi_param_0];
// load threadIdx.x into r18
mov.u32 %r18, %tid.x;
// r1 = lane = threadIdx.x & 31
and.b32 %r1, %r18, 31;
// r19 = 0
mov.u32 %r19, 0;
// r20 = 1
mov.u32 %r20, 1;
// r21 = -1
mov.u32 %r21, -1;
// r2|p1 = __shfl_up_sync(val, delta=1, 0, membermask=-1)
// if the source lane is in range, store the result into r2 and set p1
// to True; otherwise set p1 to False
// r2 corresponds to tmp in our code
shfl.sync.up.b32 %r2|%p1, %r35, %r20, %r19, %r21;
// p6 = (lane == 0)
setp.eq.s32 %p6, %r1, 0;
// if p6 is true, jump to BB0_2
@%p6 bra BB0_2;
// val += tmp
add.s32 %r35, %r2, %r35;
// delta = 2
BB0_2:
mov.u32 %r23, 2;
shfl.sync.up.b32 %r5|%p2, %r35, %r23, %r19, %r21;
setp.lt.u32 %p7, %r1, 2;
@%p7 bra BB0_4;
add.s32 %r35, %r5, %r35;
...
```

As the PTX shows, we can use the predicate register produced by __shfl_up_sync directly to make the addition conditional, avoiding the generated conditional branch instructions. The code:

```cpp
__device__ __forceinline__ int32_t ScanWarp(int32_t val) {
  int32_t result;
  asm("{"
      "  .reg .s32 r<5>;"
      "  .reg .pred p<5>;"
      "  shfl.sync.up.b32 r0|p0, %1, 1, 0, -1;"
      "  @p0 add.s32 r0, r0, %1;"
      "  shfl.sync.up.b32 r1|p1, r0, 2, 0, -1;"
      "  @p1 add.s32 r1, r1, r0;"
      "  shfl.sync.up.b32 r2|p2, r1, 4, 0, -1;"
      "  @p2 add.s32 r2, r2, r1;"
      "  shfl.sync.up.b32 r3|p3, r2, 8, 0, -1;"
      "  @p3 add.s32 r3, r3, r2;"
      "  shfl.sync.up.b32 r4|p4, r3, 16, 0, -1;"
      "  @p4 add.s32 r4, r4, r3;"
      "  mov.s32 %0, r4;"
      "}"
      : "=r"(result)
      : "r"(val));
  return result;
}
```

In addition, we remove the dependence on the large shared-memory buffer:

```cpp
__device__ __forceinline__ int32_t ScanBlock(int32_t val) {
  int32_t warp_id = threadIdx.x >> 5;
  int32_t lane = threadIdx.x & 31;
  extern __shared__ int32_t warp_sum[];
  // scan each warp
  val = ScanWarp(val);
  __syncthreads();
  // write sum of each warp to warp_sum
  if (lane == 31) {
    warp_sum[warp_id] = val;
  }
  __syncthreads();
  // use a single warp to scan warp_sum
  if (warp_id == 0) {
    warp_sum[lane] = ScanWarp(warp_sum[lane]);
  }
  __syncthreads();
  // add base
  if (warp_id > 0) {
    val += warp_sum[warp_id - 1];
  }
  __syncthreads();
  return val;
}

__global__ void ScanAndWritePartSumKernel(const int32_t* input, int32_t* part,
                                          int32_t* output, size_t n,
                                          size_t part_num) {
  for (size_t part_i = blockIdx.x; part_i < part_num; part_i += gridDim.x) {
    size_t index = part_i * blockDim.x + threadIdx.x;
    int32_t val = index < n ? input[index] : 0;
    val = ScanBlock(val);
    __syncthreads();
    if (index < n) {
      output[index] = val;
    }
    if (threadIdx.x == blockDim.x - 1) {
      part[part_i] = val;
    }
  }
}

__global__ void AddBaseSumKernel(int32_t* part, int32_t* output, size_t n,
                                 size_t part_num) {
  for (size_t part_i = blockIdx.x; part_i < part_num; part_i += gridDim.x) {
    if (part_i == 0) {
      continue;
    }
    int32_t index = part_i * blockDim.x + threadIdx.x;
    if (index < n) {
      output[index] += part[part_i - 1];
    }
  }
}

// for i in range(n):
//   output[i] = input[0] + input[1] + ... + input[i]
void ScanThenFan(const int32_t* input, int32_t* buffer, int32_t* output,
                 size_t n) {
  size_t part_size = 1024;  // tuned
  size_t part_num = (n + part_size - 1) / part_size;
  size_t block_num = std::min<size_t>(part_num, 128);
  // use buffer[0:part_num] to save the metric of part
  int32_t* part = buffer;
  // after following step, part[i] = part_sum[i]
  size_t shm_size = 32 * sizeof(int32_t);
  ScanAndWritePartSumKernel<<<block_num, part_size, shm_size>>>(
      input, part, output, n, part_num);
  if (part_num >= 2) {
    // after following step
    // part[i] = part_sum[0] + part_sum[1] + ... + part_sum[i]
    ScanThenFan(part, buffer + part_num, part, part_num);
    // make final result
    AddBaseSumKernel<<<block_num, part_size>>>(part, output, n, part_num);
  }
}
```

The time now drops to 3442us.

11 ReduceThenScan

Unlike ScanThenFan, which scans within each part on the first pass, ReduceThenScan's first pass computes only each part's sum, leaving the actual scan to the final pass. The code:

```cpp
__global__ void ReducePartSumKernel(const int32_t* input, int32_t* part_sum,
                                    int32_t* output, size_t n,
                                    size_t part_num) {
  using BlockReduce = cub::BlockReduce<int32_t, 1024>;
  __shared__ typename BlockReduce::TempStorage temp_storage;
  for (size_t part_i = blockIdx.x; part_i < part_num; part_i += gridDim.x) {
    size_t index = part_i * blockDim.x + threadIdx.x;
    int32_t val = index < n ? input[index] : 0;
    int32_t sum = BlockReduce(temp_storage).Sum(val);
    if (threadIdx.x == 0) {
      part_sum[part_i] = sum;
    }
    __syncthreads();
  }
}

__global__ void ScanWithBaseSum(const int32_t* input, int32_t* part_sum,
                                int32_t* output, size_t n, size_t part_num) {
  for (size_t part_i = blockIdx.x; part_i < part_num; part_i += gridDim.x) {
    size_t index = part_i * blockDim.x + threadIdx.x;
    int32_t val = index < n ? input[index] : 0;
    val = ScanBlock(val);
    __syncthreads();
    if (part_i >= 1) {
      val += part_sum[part_i - 1];
    }
    if (index < n) {
      output[index] = val;
    }
  }
}

void ReduceThenScan(const int32_t* input, int32_t* buffer, int32_t* output,
                    size_t n) {
  size_t part_size = 1024;  // tuned
  size_t part_num = (n + part_size - 1) / part_size;
  size_t block_num = std::min<size_t>(part_num, 128);
  int32_t* part_sum = buffer;  // use buffer[0:part_num]
  if (part_num >= 2) {
    ReducePartSumKernel<<<block_num, part_size>>>(input, part_sum, output, n,
                                                  part_num);
    ReduceThenScan(part_sum, buffer + part_num, part_sum, part_num);
  }
  ScanWithBaseSum<<<block_num, part_size, 32 * sizeof(int32_t)>>>(
      input, part_sum, output, n, part_num);
}
```

For simplicity we use cub's BlockReduce here. This version takes 3503us, a slight regression.

All the algorithms so far recurse. To eliminate the recursion, we extend the ReduceThenScan idea by cutting larger parts: if the number of parts equals the number of blocks, no recursion is needed. The code:

```cpp
__global__ void ReducePartSumKernelSinglePass(const int32_t* input,
                                              int32_t* g_part_sum, size_t n,
                                              size_t part_size) {
  // this block processes input[part_begin:part_end]
  size_t part_begin = blockIdx.x * part_size;
  size_t part_end = min((blockIdx.x + 1) * part_size, n);
  // part_sum
  int32_t part_sum = 0;
  for (size_t i = part_begin + threadIdx.x; i < part_end; i += blockDim.x) {
    part_sum += input[i];
  }
  using BlockReduce = cub::BlockReduce<int32_t, 1024>;
  __shared__ typename BlockReduce::TempStorage temp_storage;
  part_sum = BlockReduce(temp_storage).Sum(part_sum);
  __syncthreads();
  if (threadIdx.x == 0) {
    g_part_sum[blockIdx.x] = part_sum;
  }
}

__global__ void ScanWithBaseSumSinglePass(const int32_t* input,
                                          int32_t* g_base_sum, int32_t* output,
                                          size_t n, size_t part_size,
                                          bool debug) {
  // base sum
  __shared__ int32_t base_sum;
  if (threadIdx.x == 0) {
    if (blockIdx.x == 0) {
      base_sum = 0;
    } else {
      base_sum = g_base_sum[blockIdx.x - 1];
    }
  }
  __syncthreads();
  // this block processes input[part_begin:part_end]
  size_t part_begin = blockIdx.x * part_size;
  size_t part_end = (blockIdx.x + 1) * part_size;
  for (size_t i = part_begin + threadIdx.x; i < part_end; i += blockDim.x) {
    int32_t val = i < n ? input[i] : 0;
    val = ScanBlock(val);
    if (i < n) {
      output[i] = val + base_sum;
    }
    __syncthreads();
    if (threadIdx.x == blockDim.x - 1) {
      base_sum += val;
    }
    __syncthreads();
  }
}

void ReduceThenScanTwoPass(const int32_t* input, int32_t* part_sum,
                           int32_t* output, size_t n) {
  size_t part_num = 1024;
  size_t part_size = (n + part_num - 1) / part_num;
  ReducePartSumKernelSinglePass<<<part_num, 1024>>>(input, part_sum, n,
                                                    part_size);
  ScanWithBaseSumSinglePass<<<1, 1024, 32 * sizeof(int32_t)>>>(
      part_sum, nullptr, part_sum, part_num, part_num, true);
  ScanWithBaseSumSinglePass<<<part_num, 1024, 32 * sizeof(int32_t)>>>(
      input, part_sum, output, n, part_size, false);
}
```

The time drops to 2467us.
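The two-pass decomposition above can be mirrored serially on the host (a sketch only; the loops over parts stand in for the 1024 blocks running concurrently):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Serial mirror of ReduceThenScanTwoPass: pass 1 reduces each large
// part, a tiny middle step scans the part sums, and pass 2 rescans each
// part while adding the previous part's inclusive sum as a base.
std::vector<int32_t> ReduceThenScanHost(const std::vector<int32_t>& input,
                                        size_t part_num) {
  size_t n = input.size();
  size_t part_size = (n + part_num - 1) / part_num;
  std::vector<int32_t> part_sum(part_num, 0);
  // pass 1: per-part reduction only, no scan yet
  for (size_t p = 0; p < part_num; ++p) {
    size_t begin = p * part_size;
    size_t end = std::min(begin + part_size, n);
    for (size_t i = begin; i < end; ++i) {
      part_sum[p] += input[i];
    }
  }
  // middle step: scan the part sums
  for (size_t p = 1; p < part_num; ++p) {
    part_sum[p] += part_sum[p - 1];
  }
  // pass 2: rescan each part, offset by the previous part's base
  std::vector<int32_t> output(n);
  for (size_t p = 0; p < part_num; ++p) {
    size_t begin = p * part_size;
    size_t end = std::min(begin + part_size, n);
    int32_t acc = p == 0 ? 0 : part_sum[p - 1];
    for (size_t i = begin; i < end; ++i) {
      acc += input[i];
      output[i] = acc;
    }
  }
  return output;
}
```

Because the part count is fixed at the block count, the middle step is a single, small scan and the whole algorithm needs exactly two passes over the data, with no recursion.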

12 Closing Remarks

Even after all these optimizations, there is still a sizeable gap to CUB, which takes 1444us. But I have always believed in "if you can't beat them, join them", and CUB is open source, so when time permits I will dig into the CUB code and write a walkthrough.

Reference: https://www.amazon.com/CUDA-Handbook-Comprehensive-Guide-Programming/dp/0321809467

Original link: https://zhuanlan.zhihu.com/p/423992093

