The VGA example generates a 320x240 diffusion-limited aggregation (DLA) on the Altera DE2 board. A DLA is a clump formed by sticky particles adhering to an existing structure. In this design we start with one pixel at the center of the screen and let a random walker bounce around the screen until it hits that pixel; it then sticks, and a new walker is started at a randomly chosen one of the 4 corners of the screen. The random number generators for the x and y steps are XOR feedback shift registers (see also Hamblen, Appendix A). The VGA driver, PLL, and reset controller from the DE2 CDROM are needed to compile this example. Note that you must press KEY0 to start the state machine.
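The walker logic is easy to model in software (an illustrative Python sketch, not the actual Verilog; the LFSR taps, the seed, and the scaled-down demo grid are my own choices):

```python
# Illustrative software model of the walker described above -- a sketch, not
# the DE2 Verilog. A 16-bit maximal-length Fibonacci LFSR (taps 16,14,13,11)
# stands in for the XOR feedback shift registers.

def lfsr16(state):
    """Advance the 16-bit LFSR by one step."""
    bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
    return (state >> 1) | (bit << 15)

def run_walker(stuck, seed, w, h, max_steps=200000):
    """Release a walker at a corner of a w x h screen and let it wander.

    Each step moves one pixel along one axis; axis and direction both come
    from LFSR bits. As soon as the walker is 4-adjacent to the aggregate it
    sticks: its position is added to `stuck` and returned (None on give-up).
    """
    x, y, s = 0, 0, seed
    for _ in range(max_steps):
        if any((x + dx, y + dy) in stuck
               for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))):
            stuck.add((x, y))
            return (x, y)
        s = lfsr16(s)
        axis = s & 1                           # 1: step in x, 0: step in y
        s = lfsr16(s)
        step = 1 if s & 1 else -1
        if axis:
            x = min(max(x + step, 0), w - 1)   # clamp at the screen border
        else:
            y = min(max(y + step, 0), h - 1)
    return None

# Tiny demo: one seed pixel at the centre of a scaled-down 8x6 "screen".
stuck = {(4, 3)}
run_walker(stuck, seed=0xACE1, w=8, h=6)
```

In the real design each stuck walker is written to VGA memory and a new one is released from a corner; here the aggregate is just a set of coordinates.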
Tags: diffusion-limited-aggregation DLA generates 320x240
Upload time: 2014-01-16
Uploaded by: 225588
This package is a PC test port of the code from my article <Implementation of a Menu System with a Tiny Memory Footprint>.
------------------------------
The Menu_Src directory holds the menu source code.
In Ks0108.C, void Display_Locate(unsigned char DisplayData, unsigned char X, unsigned char Y) is the lowest-level display function; it calls the LCD simulation functions to do the actual drawing.
In KeyScan.C, unsigned char KeyScan(void) is the keyboard simulation function.
void DelayMs( WORD time ) is the delay routine.
------------------------------
GUI_SIM.exe is the compiled program, so you can see the GUI in action directly.
Four PC keyboard keys drive the menu:
PC key        Function in the menu
Up arrow      OK key: enter submenu
Down arrow    Cancel key: return to parent menu
Left arrow    Up key: previous menu item
Right arrow   Down key: next menu item
To build it yourself, open the VC project: \Project\Menu.dsw
The PDF of <Implementation of a Menu System with a Tiny Memory Footprint> and related material are at:
http://www.ouravr.com/bbs/bbs_content.jsp?bbs_sn=798580&bbs_page_no=3&bbs_id=9999
Upload time: 2014-06-24
Uploaded by: stvnash
Computes the number of days between two dates, determines the day of the week for a given date, prints the calendar of year y, month m for the object's current date, adds a number of days in one operation, and performs other comparison operations on two dates.
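The same operations (the function names here are illustrative, not the uploaded class's actual interface) can be sketched with Python's datetime and calendar modules:

```python
from datetime import date, timedelta
import calendar

def days_between(d1, d2):
    """Interval in days between two dates."""
    return abs((d2 - d1).days)

def weekday_name(d):
    """Day of the week of a given date, e.g. 'Sunday'."""
    return calendar.day_name[d.weekday()]

def month_calendar(year, month):
    """Text calendar of year y, month m."""
    return calendar.month(year, month)

def add_days(d, n):
    """Add a number of days in one operation."""
    return d + timedelta(days=n)

# Two of the upload dates from this listing, used as sample input
gap = days_between(date(2014, 6, 24), date(2016, 12, 18))
```

Comparison operations come for free, since date objects order chronologically (date(2014, 6, 24) < date(2016, 12, 18) is True).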
Tags: calculation
Upload time: 2016-12-18
Uploaded by:
The video encoder circuit receives 8-bit YUV data in CCIR656 format (e.g. MPEG decoder output) and encodes it into a luminance signal Y, a chrominance signal C, and a composite CVBS signal, which is output after D/A conversion. The basic encoding functions include subcarrier generation, color-difference signal modulation, and sync insertion. Mainly used in video processing and military image processing. GM7221 schematic.
Upload time: 2013-12-29
Uploaded by: Divine
Microcontroller (MCU) lecture notes, very helpful for beginners.
Tags: lecture notes
Upload time: 2013-12-15
Uploaded by: exxxds
// Below is the circle-drawing routine.
// Drawing lines, circles, and all sorts of curves is actually simple -- in the
// end it all comes down to an equation in x and y. If you are interested in
// the algorithms, find a copy of a Computer Graphics textbook; it is too much
// to explain in a few sentences here.
// ----------------------------------------------
void circleDot(unsigned char x, unsigned char y, char xx, char yy) // internal function: plot the 8 mirror points of the circle by symmetry
{ // plot the 8 mirror points of the circle by symmetry
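The full midpoint-circle routine that this 8-mirror-point helper belongs to can be sketched end to end (a hypothetical Python port, with a set of pixels standing in for the LCD driver):

```python
def draw_circle(cx, cy, r):
    """Midpoint (Bresenham) circle: compute one octant, mirror it 8 ways."""
    pixels = set()

    def circle_dot(xx, yy):
        # the 8 mirror points of (xx, yy) about the centre (cx, cy)
        for px, py in ((xx, yy), (-xx, yy), (xx, -yy), (-xx, -yy),
                       (yy, xx), (-yy, xx), (yy, -xx), (-yy, -xx)):
            pixels.add((cx + px, cy + py))

    x, y, d = 0, r, 1 - r            # d is the midpoint decision variable
    while x <= y:
        circle_dot(x, y)
        if d < 0:                    # midpoint inside the circle: keep y
            d += 2 * x + 3
        else:                        # midpoint outside: step y inward
            d += 2 * (x - y) + 5
            y -= 1
        x += 1
    return pixels
```

Only integer additions and shifts are needed per pixel, which is why the method suits small MCUs driving an LCD.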
Tags: program
Upload time: 2014-01-07
Uploaded by: 秦莞爾w
Experiment: the Hermite interpolation polynomial.
Background: through n+1 nodes there is a Hermite interpolation polynomial of degree at most 2n+1, built from the Hermite interpolation basis functions.
Data structure: three one-dimensional arrays or one two-dimensional array.
Algorithm design: (omitted). Code: (omitted).
Test case: a table of the function y = f(x) is known (m = f'(x)):

x     0.10       0.20       0.30       0.40       0.50
y     0.904837   0.818731   0.740818   0.670320   0.606531
m    -0.904837  -0.818731  -0.740818  -0.670320  -0.606531

x     0.60       0.70       0.80       0.90       1.00
y     0.548812   0.496585   0.449329   0.406570   0.367879
m    -0.548812  -0.496585  -0.449329  -0.406570  -0.367879

Use the Hermite interpolation polynomial to approximate f(x) at x = 0.55. Suggestion: plot the curve of the Hermite interpolation polynomial.
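On the two nodes bracketing x = 0.55 the piecewise-cubic special case is easy to sketch (the tabulated values are e^{-x} and its derivative, so the exact value is e^{-0.55} ≈ 0.576950; hermite2 is a hypothetical helper, not the experiment's code):

```python
def hermite2(x0, y0, m0, x1, y1, m1, x):
    """Cubic Hermite interpolant through (x0,y0), (x1,y1) with slopes m0, m1."""
    h = x1 - x0
    t = (x - x0) / h
    h00 = (1 + 2 * t) * (1 - t) ** 2   # the four Hermite basis functions
    h10 = t * (1 - t) ** 2
    h01 = t ** 2 * (3 - 2 * t)
    h11 = t ** 2 * (t - 1)
    return h00 * y0 + h10 * h * m0 + h01 * y1 + h11 * h * m1

# Bracketing nodes from the table (y = exp(-x), m = y' = -exp(-x))
approx = hermite2(0.50, 0.606531, -0.606531,
                  0.60, 0.548812, -0.548812, 0.55)
```

Because both values and slopes match at the nodes, the cubic reproduces f(0.55) to about six digits even with the coarse 0.1 spacing.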
Upload time: 2013-12-24
Uploaded by: czl10052678
Batch version of the back-propagation algorithm. Given a set of corresponding input-output pairs and an initial network, [W1,W2,critvec,iter]=batbp(NetDef,W1,W2,PHI,Y,trparms) trains the network with backpropagation. The activation functions must be either linear or tanh. The network architecture is defined by the matrix NetDef, consisting of two rows: the first row specifies the hidden layer while the second specifies the output layer.
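A minimal numpy sketch of one such batch step (hypothetical, not the batbp source; biases are omitted and the hidden layer is all-tanh, the output layer all-linear, for brevity):

```python
import numpy as np

def batch_bp_step(W1, W2, PHI, Y, eta=0.05):
    """One batch back-propagation step for a two-layer network with a tanh
    hidden layer and a linear output layer.

    PHI: inputs, one column per training sample (n_in x N)
    Y:   targets (n_out x N); W1, W2: hidden / output weight matrices
    Returns updated weights and the batch criterion before the update.
    """
    N = PHI.shape[1]
    Hid = np.tanh(W1 @ PHI)               # hidden-layer activations
    E = Y - W2 @ Hid                      # output error (linear output layer)
    dW2 = E @ Hid.T / N                   # output-layer gradient
    dHid = (W2.T @ E) * (1.0 - Hid**2)    # back-propagate through tanh
    dW1 = dHid @ PHI.T / N
    crit = 0.5 * np.mean(np.sum(E**2, axis=0))
    return W1 + eta * dW1, W2 + eta * dW2, crit
```

"Batch" here means the gradient is accumulated over all N samples before the weights move once, as opposed to updating after every sample.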
Tags: back-propagation corresponding input-output algorithm
Upload time: 2016-12-27
Uploaded by: exxxds
This function calculates Akaike's final prediction error (FPE) estimate of the average generalization error. [FPE,deff,varest,H] = fpe(NetDef,W1,W2,PHI,Y,trparms) produces the final prediction error estimate (fpe), the effective number of weights in the network if the network has been trained with weight decay, an estimate of the noise variance, and the Gauss-Newton Hessian.
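Without weight decay the estimate reduces to Akaike's classic formula FPE = V*(N+d)/(N-d), with V the training criterion, N the number of samples and d the number of weights. A sketch of that special case only (assuming the half-mean-square criterion; the weight-decay version with the effective number of weights deff is more involved):

```python
def fpe_estimate(errors, n_weights):
    """Akaike's final prediction error, no weight decay.

    errors:    residuals on the N training samples
    n_weights: d, the number of network weights
    """
    N = len(errors)
    V = sum(e * e for e in errors) / (2.0 * N)   # half mean-square criterion
    return V * (N + n_weights) / (N - n_weights)

# With 10 unit residuals and d = 2: V = 0.5, so FPE = 0.5 * 12 / 8 = 0.75
fpe = fpe_estimate([1.0] * 10, 2)
```

The (N+d)/(N-d) factor inflates the training error to penalize model size, so FPE grows as d approaches N even when the fit looks good.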
Tags: generalization calculates prediction function
Upload time: 2014-12-03
Uploaded by: maizezhen
Train a two-layer neural network with the Levenberg-Marquardt method. If desired, it is possible to use regularization by weight decay. Pruned (i.e. not fully connected) networks can also be trained. Given a set of corresponding input-output pairs and an initial network, [W1,W2,critvec,iteration,lambda]=marq(NetDef,W1,W2,PHI,Y,trparms) trains the network with the Levenberg-Marquardt method. The activation functions can be either linear or tanh. The network architecture is defined by the matrix NetDef, which has two rows: the first row specifies the hidden layer and the second row specifies the output layer.
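At the heart of the method is the damped Gauss-Newton step dw = (J'J + lambda*I)^(-1) J'e. A generic numpy sketch on a toy linear least-squares problem (illustrative only, not the toolbox's marq routine):

```python
import numpy as np

def lm_step(J, e, lam):
    """One Levenberg-Marquardt update: solve (J'J + lam*I) dw = J'e."""
    A = J.T @ J + lam * np.eye(J.shape[1])
    return np.linalg.solve(A, J.T @ e)

def fit_line(x, y, w, lam=1e-3, iters=25):
    """Fit y ~ w[0] + w[1]*x with LM steps (the model is linear here,
    so the Jacobian of the predictions is constant)."""
    J = np.column_stack([np.ones_like(x), x])
    for _ in range(iters):
        e = y - J @ w                  # residuals at the current weights
        w = w + lm_step(J, e, lam)
    return w
```

In the full algorithm lambda is adapted each iteration (raised when a step increases the criterion, lowered when it decreases), blending between gradient descent and Gauss-Newton; it is fixed here for brevity.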
Tags: Levenberg-Marquardt desired network neural
Upload time: 2016-12-27
Uploaded by: jcljkh