Generation of normally distributed Sobol sequences: quasi-random (low-discrepancy) Sobol points transformed to follow a normal distribution (a minimal sketch of the idea follows this entry).
Tags: sequences sobol normal distribution
Upload time: 2016-11-14
Uploader: diets
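The uploaded source is not reproduced here. The sketch below is only an illustration of the idea, under two assumptions: it uses the base-2 van der Corput radical inverse (which coincides with the first dimension of a Sobol sequence) as the quasi-uniform generator, and it maps each point to a standard normal variate with the inverse-CDF transform, implemented here by Newton's method on the normal CDF via std::erf.

#include <cmath>
#include <cstdint>
#include <cstdio>

const double PI = 3.14159265358979323846;

// First dimension of a Sobol sequence: the base-2 van der Corput radical inverse.
double van_der_corput(uint32_t n) {
    double result = 0.0, base = 0.5;
    while (n) {
        if (n & 1u) result += base;
        base *= 0.5;
        n >>= 1;
    }
    return result;
}

// Inverse of the standard normal CDF, found by Newton's method on Phi(x) - u = 0.
double inverse_normal_cdf(double u) {
    double x = 0.0;
    for (int k = 0; k < 50; ++k) {
        double cdf = 0.5 * (1.0 + std::erf(x / std::sqrt(2.0)));
        double pdf = std::exp(-0.5 * x * x) / std::sqrt(2.0 * PI);
        double step = (cdf - u) / pdf;
        x -= step;
        if (std::fabs(step) < 1e-12) break;
    }
    return x;
}

int main() {
    // Map quasi-uniform points in (0,1) to standard normal deviates.
    for (uint32_t i = 1; i <= 10; ++i) {
        double u = van_der_corput(i);
        std::printf("u = %.6f  ->  z = %.6f\n", u, inverse_normal_cdf(u));
    }
    return 0;
}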
Numerical algorithms, including: (1) Lagrange interpolation; (2) Newton interpolation; (3) evaluation of f(x0) by Horner's scheme (Qin Jiushao's method); (4) evaluation of a real-coefficient polynomial f(z0) at a complex point z0; (5) bisection method for roots of f(x) = 0; (6) secant method for roots of f(x) = 0; (7) real and complex roots of polynomial equations with real coefficients; (8) solution of linear systems by Gaussian elimination with partial (column) pivoting; (9) Fast Fourier Transform (FFT). A sketch of item (3) follows this entry.
Tags: numerical computation algorithms
Upload time: 2016-11-15
Uploader: a3318966
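As an illustration of item (3) only, here is a minimal C++ sketch of Horner's scheme, assuming coefficients are supplied from the highest degree downward; it is not the uploaded code.

#include <cstdio>
#include <vector>

// Horner's scheme: evaluate a[0]*x^n + a[1]*x^(n-1) + ... + a[n]
// with n multiplications and n additions.
double horner(const std::vector<double>& a, double x0) {
    double result = 0.0;
    for (double coeff : a) {
        result = result * x0 + coeff;
    }
    return result;
}

int main() {
    std::vector<double> a = {2.0, -3.0, 0.0, 5.0};  // 2x^3 - 3x^2 + 5
    std::printf("f(2) = %g\n", horner(a, 2.0));     // 2*8 - 3*4 + 5 = 9
    return 0;
}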
Includes the bisection method, the damped ("downhill") Newton method, and an improved Newton iteration (a sketch of the damped Newton idea follows this entry).
Tags: bisection
Upload time: 2016-11-19
Uploader: yoleeson
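A minimal sketch of the damped ("downhill") Newton idea, halving the step until |f| decreases; the test function f(x) = x^3 - x - 1 is an assumption for illustration and this is not the uploaded code.

#include <cmath>
#include <cstdio>

double f(double x)  { return x * x * x - x - 1.0; }
double df(double x) { return 3.0 * x * x - 1.0; }

// Damped (downhill) Newton: accept the Newton step only if |f| decreases,
// otherwise halve the damping factor lambda.
double damped_newton(double x, double tol = 1e-10, int max_iter = 100) {
    for (int k = 0; k < max_iter; ++k) {
        double fx = f(x);
        if (std::fabs(fx) < tol) break;
        double step = fx / df(x);
        double lambda = 1.0;
        double x_new = x - lambda * step;
        while (std::fabs(f(x_new)) >= std::fabs(fx) && lambda > 1e-8) {
            lambda *= 0.5;                      // downhill condition not met: damp further
            x_new = x - lambda * step;
        }
        x = x_new;
    }
    return x;
}

int main() {
    // Real root of x^3 - x - 1 is approximately 1.3247179.
    std::printf("root: %.10f\n", damped_newton(0.6));
    return 0;
}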
即使對(duì)于一個(gè)簡(jiǎn)單的電力系統(tǒng),潮流計(jì)算也不是一件簡(jiǎn)單就可以完成的事,其運(yùn)算量很大,因此如果對(duì)于一個(gè)大的、復(fù)雜的電網(wǎng)來說的話,由于其節(jié)點(diǎn)多,分支雜,其計(jì)算量可想而知,人工對(duì)其計(jì)算也更是難上加難了。特別是在現(xiàn)實(shí)生活中,遇到一個(gè)電力系統(tǒng)不會(huì)像我們期望的那樣可以知道它的首端電壓和首端功率或者是末端電壓和末端功率,而是只知道它的首端電壓和末端功率,更是使計(jì)算變的頭疼萬分。為了使計(jì)算變的簡(jiǎn)單,我們就可以利用計(jì)算機(jī),用C語言編程來實(shí)現(xiàn)牛頓-拉夫遜(Newton-Raphson)迭代法,最終實(shí)現(xiàn)對(duì)電力系統(tǒng)潮流的計(jì)算。
標(biāo)簽: 電力系統(tǒng)
上傳時(shí)間: 2016-12-26
上傳用戶:xieguodong1234
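The uploaded C program is not reproduced here. As a rough illustration of the iterative core only, the sketch below solves a toy 2x2 nonlinear system by Newton-Raphson; in an actual power-flow program the residuals would be the active/reactive power mismatch equations and the matrix would be their Jacobian. The toy residuals and the initial guess are assumptions made for this example.

#include <cmath>
#include <cstdio>

// Toy residuals standing in for power mismatch equations (illustrative only).
void residual(double x, double y, double r[2]) {
    r[0] = x * x + y * y - 4.0;    // stands in for a P-mismatch equation
    r[1] = std::exp(x) + y - 1.0;  // stands in for a Q-mismatch equation
}

// Analytic Jacobian of the toy residuals.
void jacobian(double x, double y, double J[2][2]) {
    J[0][0] = 2.0 * x;        J[0][1] = 2.0 * y;
    J[1][0] = std::exp(x);    J[1][1] = 1.0;
}

int main() {
    double x = 1.0, y = -1.0;                 // initial guess (analogue of a flat start)
    for (int k = 0; k < 20; ++k) {
        double r[2], J[2][2];
        residual(x, y, r);
        if (std::fabs(r[0]) + std::fabs(r[1]) < 1e-12) break;
        jacobian(x, y, J);
        // Solve J * d = -r; Cramer's rule is sufficient for a 2x2 system.
        double det = J[0][0] * J[1][1] - J[0][1] * J[1][0];
        double dx = (-r[0] * J[1][1] + r[1] * J[0][1]) / det;
        double dy = (-r[1] * J[0][0] + r[0] * J[1][0]) / det;
        x += dx;  y += dy;
    }
    std::printf("solution: x = %.8f, y = %.8f\n", x, y);
    return 0;
}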
This function calculates Akaike's final prediction error (FPE) estimate of the average generalization error. [FPE,deff,varest,H] = fpe(NetDef,W1,W2,PHI,Y,trparms) produces the final prediction error estimate (FPE), the effective number of weights in the network if the network has been trained with weight decay, an estimate of the noise variance, and the Gauss-Newton Hessian. The classical form of the FPE criterion is given after this entry.
Tags: generalization calculates prediction function
Upload time: 2014-12-03
Uploader: maizezhen
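For reference, one common statement of the classical (unregularized) Akaike final prediction error, with N training samples, d_eff effective weights, and V_N the mean-square training error, is given below; the toolbox version additionally accounts for weight decay, which is not reproduced here.

\[
\mathrm{FPE} = \hat V_N(\hat\theta)\,\frac{N + d_{\mathrm{eff}}}{N - d_{\mathrm{eff}}},
\qquad
\hat V_N(\hat\theta) = \frac{1}{N}\sum_{t=1}^{N}\bigl(y(t) - \hat y(t\mid\hat\theta)\bigr)^{2}
\]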
This function calculates Akaike's final prediction error (FPE) estimate of the average generalization error for network models generated by NNARX, NNOE, NNARMAX1+2, or their recursive counterparts. [FPE,deff,varest,H] = nnfpe(method,NetDef,W1,W2,U,Y,NN,trparms,skip,Chat) produces the final prediction error estimate (FPE), the effective number of weights in the network if it has been trained with weight decay, an estimate of the noise variance, and the Gauss-Newton Hessian. The classical FPE form shown after the previous entry applies here as well.
Tags: generalization calculates prediction function
Upload time: 2016-12-27
Uploader: 腳趾頭
Train a two-layer neural network with a recursive prediction error algorithm ("recursive Gauss-Newton"). Pruned (i.e., not fully connected) networks can also be trained. The activation functions can be either linear or tanh. The network architecture is defined by the matrix NetDef, which has two rows: the first row specifies the hidden layer and the second specifies the output layer. The generic form of the recursive update is given after this entry.
Tags: recursive prediction algorithm Gauss-Newton
Upload time: 2016-12-27
Uploader: ljt101007
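The recursive prediction error ("recursive Gauss-Newton") update has the familiar RLS-like form. The equations below are a standard generic statement with forgetting factor lambda, prediction error epsilon(t), and predictor gradient psi(t) = d y-hat(t)/d theta; the notation is not taken from the toolbox documentation.

\[
\varepsilon(t) = y(t) - \hat y(t),\qquad
\hat\theta(t) = \hat\theta(t-1) + P(t)\,\psi(t)\,\varepsilon(t),
\]
\[
P(t) = \frac{1}{\lambda}\left(P(t-1) - \frac{P(t-1)\,\psi(t)\,\psi(t)^{\top} P(t-1)}{\lambda + \psi(t)^{\top} P(t-1)\,\psi(t)}\right)
\]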
Having just finished a numerical analysis course in the first year of graduate study, I wrote subroutines for several algorithms that can be called directly; the meaning of each parameter is explained in the files. The five algorithms are: Lagrange interpolation, Hermite interpolation, Newton interpolation, the modified Hamming (predictor-corrector) method, and the Romberg acceleration algorithm. I hope they are helpful. (A sketch of Romberg integration follows this entry.)
Tags: numerical analysis
Upload time: 2014-06-16
Uploader: 氣溫達(dá)上千萬的
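As an illustration of the Romberg acceleration item only, here is a minimal C++ sketch of Romberg integration (Richardson extrapolation of the composite trapezoidal rule); the integrand is an assumption for the example, and this is not the uploaded code.

#include <cmath>
#include <cstdio>
#include <vector>

const double PI = 3.14159265358979323846;

// Romberg integration: Richardson extrapolation of the trapezoidal rule.
double romberg(double (*f)(double), double a, double b,
               int max_level = 12, double tol = 1e-10) {
    std::vector<std::vector<double>> R(max_level, std::vector<double>(max_level, 0.0));
    double h = b - a;
    R[0][0] = 0.5 * h * (f(a) + f(b));
    for (int i = 1; i < max_level; ++i) {
        h *= 0.5;
        // Composite trapezoidal rule on 2^i panels, reusing previous evaluations.
        double sum = 0.0;
        int n = 1 << (i - 1);                 // number of new midpoints
        for (int k = 1; k <= n; ++k)
            sum += f(a + (2 * k - 1) * h);
        R[i][0] = 0.5 * R[i - 1][0] + h * sum;
        // Richardson extrapolation across the row.
        double p4 = 4.0;
        for (int j = 1; j <= i; ++j) {
            R[i][j] = R[i][j - 1] + (R[i][j - 1] - R[i - 1][j - 1]) / (p4 - 1.0);
            p4 *= 4.0;
        }
        if (std::fabs(R[i][i] - R[i - 1][i - 1]) < tol) return R[i][i];
    }
    return R[max_level - 1][max_level - 1];
}

double integrand(double x) { return std::sin(x); }

int main() {
    // Integral of sin(x) over [0, pi] equals 2.
    std::printf("%.12f\n", romberg(integrand, 0.0, PI));
    return 0;
}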
The damped least-squares method (i.e., the Levenberg-Marquardt algorithm) is a modification of the Gauss-Newton algorithm; the standard form of both steps is given after this entry.
Tags: Levenberg-Marquardt damped least-squares algorithm
Upload time: 2014-02-19
Uploader: 無聊來刷下
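In the standard textbook statement (not taken from the uploaded code), with Jacobian J and residual vector r, the Gauss-Newton step solves the normal equations, while Levenberg-Marquardt adds a damping term mu*I that interpolates between Gauss-Newton and gradient descent.

\[
\text{Gauss-Newton:}\quad (J^{\top}J)\,\delta = -J^{\top} r,
\qquad
\text{Levenberg-Marquardt:}\quad (J^{\top}J + \mu I)\,\delta = -J^{\top} r,\ \ \mu > 0
\]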
C++ source code for commonly used numerical algorithms, including cubic spline interpolation, double (two-dimensional) integration, numerical solution of initial value problems for ordinary differential equations, Newton iteration, Gaussian elimination with partial (column) pivoting, and successive over-relaxation (SOR) iteration. (A sketch of SOR follows this entry.)
Upload time: 2013-12-28
Uploader: jkhjkh1982
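As an illustration of the SOR item only, here is a minimal C++ sketch of successive over-relaxation on a small diagonally dominant system; the matrix, right-hand side, and relaxation factor are assumptions for the example, not the uploaded source.

#include <cmath>
#include <cstdio>

// Successive over-relaxation (SOR) for Ax = b on a small diagonally dominant system.
int main() {
    const int n = 3;
    double A[n][n] = {{ 4.0, -1.0,  0.0},
                      {-1.0,  4.0, -1.0},
                      { 0.0, -1.0,  4.0}};
    double b[n] = {2.0, 4.0, 10.0};           // exact solution is (1, 2, 3)
    double x[n] = {0.0, 0.0, 0.0};
    const double omega = 1.1;                 // relaxation factor, 0 < omega < 2
    for (int iter = 0; iter < 100; ++iter) {
        double max_change = 0.0;
        for (int i = 0; i < n; ++i) {
            double sigma = 0.0;
            for (int j = 0; j < n; ++j)
                if (j != i) sigma += A[i][j] * x[j];
            double x_new = (1.0 - omega) * x[i] + omega * (b[i] - sigma) / A[i][i];
            max_change = std::fmax(max_change, std::fabs(x_new - x[i]));
            x[i] = x_new;
        }
        if (max_change < 1e-12) break;
    }
    std::printf("x = (%.8f, %.8f, %.8f)\n", x[0], x[1], x[2]);
    return 0;
}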