A cute little guestbook; you won't regret downloading it! Features: supports cookies; supports a small set of UBB codes (color/URL); supports automatic hyperlinking; supports emoticons (56 built in, expandable to 60); supports avatars (8 built in, more can be added via skins with no size limit).
Upload time: 2013-12-25
Uploaded by: er1219
Playfair encryption/decryption algorithm, in which I and Q are combined into a single letter. Hope you like it.
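For illustration, here is a minimal Python sketch of Playfair encryption (not the uploaded code) in which Q is folded into I when building the 5x5 key square, as the description above specifies; the key and sample text are arbitrary.

```python
def build_square(key):
    # 5x5 key square over a 25-letter alphabet with Q folded into I,
    # matching the description above (rather than the usual I/J merge).
    seen, square = set(), []
    for ch in key.upper() + "ABCDEFGHIJKLMNOPRSTUVWXYZ":
        ch = "I" if ch == "Q" else ch
        if ch.isalpha() and ch not in seen:
            seen.add(ch)
            square.append(ch)
    return square  # row-major list of 25 letters

def encrypt(plaintext, key):
    square = build_square(key)
    pos = {ch: divmod(i, 5) for i, ch in enumerate(square)}
    # Normalise: letters only, Q -> I, split into digraphs, pad doubled letters with X.
    text = ["I" if c == "Q" else c for c in plaintext.upper() if c.isalpha()]
    pairs, i = [], 0
    while i < len(text):
        a = text[i]
        if i + 1 < len(text) and text[i + 1] != a:
            b, i = text[i + 1], i + 2
        else:
            b, i = "X", i + 1
        pairs.append((a, b))
    out = []
    for a, b in pairs:
        (ra, ca), (rb, cb) = pos[a], pos[b]
        if ra == rb:        # same row: take the letter to the right
            out += [square[ra * 5 + (ca + 1) % 5], square[rb * 5 + (cb + 1) % 5]]
        elif ca == cb:      # same column: take the letter below
            out += [square[((ra + 1) % 5) * 5 + ca], square[((rb + 1) % 5) * 5 + cb]]
        else:               # rectangle: swap the columns
            out += [square[ra * 5 + cb], square[rb * 5 + ca]]
    return "".join(out)

print(encrypt("HELLO WORLD", "PLAYFAIR EXAMPLE"))
```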
Upload time: 2013-12-17
Uploaded by: ainimao
Clinical drug trial: Gehan's Two-Stage Design
Upload time: 2013-12-10
Uploaded by: qw12
Clinical drug trial: Simon's Two-Stage Design
Upload time: 2014-01-10
Uploaded by: weiwolkt
Two-wire/I2C bus read/write sample routines for Microchip's 24Cxx / 85Cxx serial CMOS EEPROMs interfaced to a PIC16C54 8-bit CMOS single-chip microcomputer. Revised Version 2.0 (4/2/92). Part used: PIC16C54-XT/JW. Notes: 1) all timings are based on a reference crystal frequency of 2 MHz, which is equivalent to an instruction cycle time of 2 µs; 2) address and literal values are in octal unless otherwise specified.
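The routines themselves are PIC16C54 assembly; as a rough illustration of the same 24Cxx bus transactions (control byte, word address, data, and the internal write cycle), here is a hedged Python sketch using the smbus2 library on a Linux host. The bus number 1, device address 0x50, word address, and delay are assumptions, not values from the original code.

```python
from smbus2 import SMBus
import time

EEPROM_ADDR = 0x50              # assumed 1010 A2 A1 A0 control byte, address pins tied low

with SMBus(1) as bus:           # assumed I2C bus number on the host
    # Byte write: control byte, word address, data; the part then runs its
    # internal write cycle, during which it does not acknowledge.
    bus.write_byte_data(EEPROM_ADDR, 0x00, 0xA5)
    time.sleep(0.01)            # wait out the write cycle (ACK polling is the usual alternative)

    # Random read: a dummy write sets the word address, then a repeated start
    # and a read clock the data byte back out.
    value = bus.read_byte_data(EEPROM_ADDR, 0x00)
    print(hex(value))
```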
Tags: Microchip Routines Sample WRITE
Upload time: 2013-12-27
Uploaded by: ljmwh2000
MUSCL scheme for the two-dimensional Euler equations
Tags: dimensions Muscl Euler Two
Upload time: 2013-12-07
Uploaded by: 363186
… measure through the cross-entropy of test data. In addition, we introduce two novel smoothing techniques, one a variation of Jelinek-Mercer smoothing and one a very simple linear interpolation technique, both of which outperform existing methods.
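As a hedged illustration of the baseline technique the excerpt builds on, here is a small Python sketch of classic Jelinek-Mercer (linear interpolation) smoothing for a bigram model, evaluated by cross-entropy on held-out text. It is not the paper's specific variant, and the fixed interpolation weight and toy corpora are assumptions.

```python
# Jelinek-Mercer smoothing: interpolate the bigram maximum-likelihood estimate
# with the unigram estimate, then score held-out text by cross-entropy.
import math
from collections import Counter

def train(tokens):
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    return unigrams, bigrams, len(tokens)

def prob(w_prev, w, unigrams, bigrams, total, lam=0.7):
    # P_JM(w | w_prev) = lam * P_ML(w | w_prev) + (1 - lam) * P_ML(w)
    p_uni = unigrams[w] / total
    p_bi = bigrams[(w_prev, w)] / unigrams[w_prev] if unigrams[w_prev] else 0.0
    return lam * p_bi + (1 - lam) * p_uni

def cross_entropy(test_tokens, unigrams, bigrams, total):
    # Average negative log2 probability per predicted token.
    logp = 0.0
    for w_prev, w in zip(test_tokens, test_tokens[1:]):
        logp += math.log2(prob(w_prev, w, unigrams, bigrams, total) + 1e-12)
    return -logp / (len(test_tokens) - 1)

train_toks = "the cat sat on the mat the dog sat on the rug".split()
test_toks = "the cat sat on the rug".split()
print(cross_entropy(test_toks, *train(train_toks)))
```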
Tags: cross-entropy introduce smoothing addition
Upload time: 2014-01-06
Uploaded by: qilin
Modern radars generally use coherent signal processing, and obtaining high-precision baseband digital quadrature (I, Q) signals is the key to the success of the whole system's signal processing. The usual practice in the past was to derive the I and Q signals with an analog phase detector, whose quadrature performance was typically amplitude balance of about 2% and phase quadrature error of about 2°, i.e. the image power introduced by the amplitude/phase errors was around -34 dB. This limited further improvement of the signal processor, so in recent years digital phase detectors that recover the I and Q signals by directly sampling a low IF have been proposed. With the successful development and widespread use of high-resolution, high-speed A/D converters, digital phase detection has become practical. After direct IF sampling and digital quadrature processing, the resulting I-channel and Q-channel sample sequences are offset in time by one sampling interval, so a re-sequencing step is required to restore synchronously output I and Q sequences.
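As a hedged illustration of the scheme described above, here is a small NumPy sketch of quadrature recovery by sampling an IF tone at four times the IF: the I and Q streams come out interleaved, offset by one sampling interval, and are re-timed before output. The test tone, the fs = 4*f_IF choice, and the linear-interpolation re-timing are assumptions standing in for the article's exact processing.

```python
import numpy as np

f_if = 1.0e6
fs = 4.0 * f_if                         # sample at four times the IF (assumed)
n = 4096
t = np.arange(n) / fs
phi = 0.3                               # carrier phase the demodulator should recover
x = np.cos(2.0 * np.pi * f_if * t + phi)

# With fs = 4*f_IF, successive samples follow the pattern +I, -Q, -I, +Q:
# even samples carry +/-I, odd samples carry -/+Q.
i_raw = x[0::2] * np.resize([1.0, -1.0], n // 2)
q_raw = x[1::2] * np.resize([-1.0, 1.0], n // 2)

# Re-sequencing step: the Q samples lag the I samples by one original sample
# interval, so interpolate Q onto the I sampling instants before output.
q_aligned = np.interp(t[0::2], t[1::2], q_raw)

print("recovered phase:", np.angle(np.mean(i_raw + 1j * q_aligned)))  # close to phi
```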
Upload time: 2016-12-27
Uploaded by: yxgi5
Train a two-layer neural network with the Levenberg-Marquardt method. If desired, it is possible to use regularization by weight decay; pruned (i.e., not fully connected) networks can also be trained. Given a set of corresponding input-output pairs and an initial network, [W1,W2,critvec,iteration,lambda]=marq(NetDef,W1,W2,PHI,Y,trparms) trains the network with the Levenberg-Marquardt method. The activation functions can be either linear or tanh. The network architecture is defined by the matrix NetDef, which has two rows: the first row specifies the hidden layer and the second row specifies the output layer.
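A hedged Python/NumPy sketch of the same idea, Levenberg-Marquardt training of a tanh/linear two-layer network, follows; it is written independently of the marq toolbox routine, and the finite-difference Jacobian, lambda-adaptation constants, and toy sine-fitting task are assumptions.

```python
import numpy as np

def forward(w, phi, n_hidden):
    # Unpack the flat parameter vector into W1 (hidden layer) and W2 (output layer).
    n_in = phi.shape[0]
    w1 = w[: n_hidden * (n_in + 1)].reshape(n_hidden, n_in + 1)
    w2 = w[n_hidden * (n_in + 1):].reshape(1, n_hidden + 1)
    h = np.tanh(w1 @ np.vstack([phi, np.ones(phi.shape[1])]))
    return w2 @ np.vstack([h, np.ones(h.shape[1])])

def marq_sketch(phi, y, n_hidden=5, iters=100, lam=1.0):
    rng = np.random.default_rng(0)
    n_w = n_hidden * (phi.shape[0] + 1) + (n_hidden + 1)
    w = 0.1 * rng.standard_normal(n_w)
    for _ in range(iters):
        r = (forward(w, phi, n_hidden) - y).ravel()
        # Finite-difference Jacobian of the residuals w.r.t. the weights.
        J = np.empty((r.size, n_w))
        for j in range(n_w):
            dw = np.zeros(n_w); dw[j] = 1e-6
            J[:, j] = ((forward(w + dw, phi, n_hidden) - y).ravel() - r) / 1e-6
        step = np.linalg.solve(J.T @ J + lam * np.eye(n_w), -J.T @ r)
        if np.sum((forward(w + step, phi, n_hidden) - y) ** 2) < np.sum(r ** 2):
            w, lam = w + step, lam * 0.5     # accept step, trust the quadratic model more
        else:
            lam *= 2.0                       # reject step, move toward gradient descent
    return w

# Toy usage: fit y = sin(x) from 50 samples.
x = np.linspace(-3, 3, 50).reshape(1, -1)
y = np.sin(x)
w = marq_sketch(x, y)
print("final SSE:", np.sum((forward(w, x, 5) - y) ** 2))
```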
Tags: Levenberg-Marquardt desired network neural
Upload time: 2016-12-27
Uploaded by: jcljkh
Train a two-layer neural network with a recursive prediction error algorithm ("recursive Gauss-Newton"). Pruned (i.e., not fully connected) networks can also be trained. The activation functions can be either linear or tanh. The network architecture is defined by the matrix NetDef, which has two rows: the first row specifies the hidden layer, while the second specifies the output layer.
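Likewise, a hedged Python/NumPy sketch of a per-sample recursive prediction-error (recursive Gauss-Newton) update for a small two-layer network follows; the finite-difference gradient, forgetting factor, and initial covariance are assumptions rather than the toolbox's exact choices.

```python
import numpy as np

N_IN, N_HID = 1, 5
N_W = N_HID * (N_IN + 1) + (N_HID + 1)

def predict(w, x):
    # Two-layer network: tanh hidden layer, linear scalar output.
    w1 = w[: N_HID * (N_IN + 1)].reshape(N_HID, N_IN + 1)
    w2 = w[N_HID * (N_IN + 1):]
    h = np.tanh(w1 @ np.append(x, 1.0))
    return float(w2 @ np.append(h, 1.0))

def rpe_step(w, P, x, y, lam=0.995, eps=1e-6):
    # Gradient psi of the prediction w.r.t. the weights (finite differences).
    y_hat = predict(w, x)
    psi = np.array([(predict(w + eps * e, x) - y_hat) / eps for e in np.eye(N_W)])
    k = P @ psi / (lam + psi @ P @ psi)        # Gauss-Newton gain
    w = w + k * (y - y_hat)                    # correct the weights on the prediction error
    P = (P - np.outer(k, psi @ P)) / lam       # update the inverse-Hessian estimate
    return w, P

rng = np.random.default_rng(0)
w, P = 0.1 * rng.standard_normal(N_W), 100.0 * np.eye(N_W)
for x in np.linspace(-3, 3, 500):              # stream the toy samples one at a time
    w, P = rpe_step(w, P, np.array([x]), np.sin(x))
print("prediction at 1.0:", predict(w, np.array([1.0])), "target:", np.sin(1.0))
```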
Tags: recursive prediction algorithm Gauss-Ne
Upload time: 2016-12-27
Uploaded by: ljt101007