COMP3340 Applied Deep Learning The University of Hong Kong
Assignment 1
Feb 2025
Question 1 - XOR Approximation
We consider the problem of designing a feedforward neural network to approximate the XOR
function. Specifically, for any input points (x1, x2), x1, x2 ∈ {0, 1}, the output of the network
should be approximately equal to x1 ⊕ x2. Suppose the network has two input neurons, one
hidden layer with two neurons, and an output layer with one neuron, as shown in Figure 1.
The activation function for all neurons is the Sigmoid function, defined as σ(z) = 1 / (1 + e^(−z)).
(a) Please provide the specific values for the parameters in your designed network. Demonstrate how your network approximates the XOR function (Table 1) by performing forward propagation on the inputs (x1, x2), x1, x2 ∈ {0, 1}.
(b) If we need the neural network to approximate the XNOR function (Table 1), how should
we modify the output neuron without altering the neurons in the hidden layer?
x1   x2   x1 ⊕ x2   x1 ⊙ x2
0    0       0          1
0    1       1          0
1    0       1          0
1    1       0          1

Table 1: XOR and XNOR Value Table
Figure 1: Network structure and the notation of parameters
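As a sanity check on a design for parts (a) and (b), the short Python sketch below runs forward propagation with one possible (hypothetical) choice of parameters, not the required answer: large weights saturate the sigmoids, so the first hidden neuron approximates OR(x1, x2), the second approximates AND(x1, x2), and the output combines them into roughly OR-and-not-AND, i.e. XOR. Flipping the signs of the output neuron's weights and bias then yields XNOR without touching the hidden layer, which addresses part (b).

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical parameters: large magnitudes push each sigmoid toward 0 or 1.
W_hidden = [(20.0, 20.0, -10.0),   # h1 = sigmoid(20*x1 + 20*x2 - 10)  ~ OR
            (20.0, 20.0, -30.0)]   # h2 = sigmoid(20*x1 + 20*x2 - 30)  ~ AND
w_out_xor  = (20.0, -40.0, -10.0)  # y = sigmoid(20*h1 - 40*h2 - 10)   ~ XOR
w_out_xnor = (-20.0, 40.0, 10.0)   # part (b): flip all output signs   ~ XNOR

def forward(x1, x2, w_out):
    # Forward propagation through the 2-2-1 network of Figure 1.
    hs = [sigmoid(a * x1 + b * x2 + c) for (a, b, c) in W_hidden]
    u, v, c = w_out
    return sigmoid(u * hs[0] + v * hs[1] + c)

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2,
              round(forward(x1, x2, w_out_xor), 3),    # ~ x1 XOR x2
              round(forward(x1, x2, w_out_xnor), 3))   # ~ x1 XNOR x2
```

With these weight magnitudes each printed output lands within roughly 10^−4 of the corresponding truth-table value in Table 1.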
Question 2 - Backpropagation
We consider the forward pass and backpropagation in a neural network whose structure is shown in Figure 1. The network parameters are initialized as w1 = 1, w2 = −2, w3 = 2, w4 = −1, w5 = 1, w6 = 1, b1 = b2 = b3 = 0. The activation function for all neurons is the Sigmoid function, defined as σ(z) = 1 / (1 + e^(−z)).
(a) Suppose the input sample is (1, 2) and the ground truth label is 0.1. Please compute
the output y of the network.
(b) Suppose we use the Mean Squared Error (MSE) loss. Please compute the loss value for the sample (1, 2) and its gradient with respect to the network parameters using the chain rule. Give final answers to 3 significant figures.
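For part (b), the two key ingredients are the sigmoid derivative, σ′(z) = σ(z)(1 − σ(z)), and the chain rule. For example, writing z3 = w5·h1 + w6·h2 + b3 for the output pre-activation and h1 for the first hidden activation (labels chosen here for illustration; Figure 1 fixes the exact notation), the gradient for w5 under L = (y − t)² decomposes as ∂L/∂w5 = (∂L/∂y)(∂y/∂z3)(∂z3/∂w5) = 2(y − t) · y(1 − y) · h1.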
(c) Suppose we use stochastic gradient descent (SGD) with a learning rate of α = 0.1.
Please specify the parameters of the network after one step of gradient descent, using the
gradient computed in (b). Please also specify the prediction value and the corresponding loss
of the new network on the same input (1, 2).
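The sketch below works through parts (a)–(c) numerically. It assumes the wiring w1, w2 → hidden neuron 1, w3, w4 → hidden neuron 2, and w5, w6 → output (Figure 1 fixes the actual assignment of weights, so adjust if it differs), and it takes the single-sample MSE as L = (y − t)²; if the course convention includes a factor of 1/2, the gradients scale accordingly.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Initial parameters from the problem statement.
w1, w2, w3, w4, w5, w6 = 1.0, -2.0, 2.0, -1.0, 1.0, 1.0
b1 = b2 = b3 = 0.0
x1, x2, t = 1.0, 2.0, 0.1   # input sample and ground-truth label
alpha = 0.1                  # SGD learning rate

# --- Part (a): forward pass (wiring assumption: w1, w2 -> h1; w3, w4 -> h2) ---
z1 = w1 * x1 + w2 * x2 + b1
z2 = w3 * x1 + w4 * x2 + b2
h1, h2 = sigmoid(z1), sigmoid(z2)
z3 = w5 * h1 + w6 * h2 + b3
y = sigmoid(z3)
print(f"output y = {y:.3g}")

# --- Part (b): MSE loss and gradients via the chain rule ---
# L = (y - t)^2, dL/dy = 2(y - t), and sigma'(z) = sigma(z)(1 - sigma(z)).
loss = (y - t) ** 2
delta3 = 2 * (y - t) * y * (1 - y)          # dL/dz3
grads = {"w5": delta3 * h1, "w6": delta3 * h2, "b3": delta3}
delta1 = delta3 * w5 * h1 * (1 - h1)        # dL/dz1
delta2 = delta3 * w6 * h2 * (1 - h2)        # dL/dz2
grads.update({"w1": delta1 * x1, "w2": delta1 * x2, "b1": delta1,
              "w3": delta2 * x1, "w4": delta2 * x2, "b2": delta2})
print(f"loss = {loss:.3g}")
for name, g in grads.items():
    print(f"dL/d{name} = {g:.3g}")

# --- Part (c): one SGD step, then re-evaluate on the same sample ---
w1 -= alpha * grads["w1"]; w2 -= alpha * grads["w2"]; b1 -= alpha * grads["b1"]
w3 -= alpha * grads["w3"]; w4 -= alpha * grads["w4"]; b2 -= alpha * grads["b2"]
w5 -= alpha * grads["w5"]; w6 -= alpha * grads["w6"]; b3 -= alpha * grads["b3"]

h1 = sigmoid(w1 * x1 + w2 * x2 + b1)
h2 = sigmoid(w3 * x1 + w4 * x2 + b2)
y_new = sigmoid(w5 * h1 + w6 * h2 + b3)
print(f"new prediction = {y_new:.3g}, new loss = {(y_new - t) ** 2:.3g}")
```

Printing with the `{:.3g}` format specifier rounds to 3 significant figures, matching the requirement in part (b).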
