Tuesday 27 July 2021

PowerShell script to check for a directory and email if it doesn't exist

A simple script to send an email if a directory or mapped drive doesn't exist. This uses SMTP authentication.

# Build the credential used for SMTP authentication
$password = ConvertTo-SecureString 'EnterPassword!' -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential ('username', $password)

# Alert by email if the path is missing
if (Test-Path -Path P:\) {

    "Path exists!"

} else {

    Send-MailMessage -SmtpServer smtp.office365.com -Port 587 -UseSsl -To youremail@organization.com -From senderaddress@organization.com -Credential $credential -Subject "Your Subject" -Body "Body Contents" -BodyAsHtml -Priority High

}

 

Friday 23 July 2021

Set up Single Sign-On for AWS Accounts using Azure AD

 

Set up the SAML link between Azure AD and AWS


The following is taken from https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/amazon-web-service-tutorial but has been cut down to just the relevant bits, with the steps rearranged into a more logical order.

  1. Log in to Azure - https://portal.azure.com/#blade/Microsoft_AAD_IAM/StartboardApplicationsMenuBlade/AllApps/menuId
  2. Click 'New Application'.
  3. Search for 'AWS Single-Account Access', click the tile and click 'Create'.
  4. Once it has finished creating the application, click the Properties tab on the left. Change the name to reflect the account ID that you are connecting to, for example 'AWS Prod Account ID123456789'.
  5. Select 'Single sign-on' from the left menu and choose 'SAML'. Select 'No, I'll save later' at the prompt that appears.
  6. Edit the 'Basic SAML Configuration' section. The Entity ID must be unique. If this is your first SSO link to AWS then you can use the defaults. If you are setting up multiple accounts you must append a hash (#) followed by the next available number. All configured accounts are detailed at the bottom. Click 'Save' and close the window when this is populated.

  7. Edit the 'SAML Signing Certificate' section. Click 'New Certificate'. In the Notification Email Addresses section, remove any email addresses that are auto-populated and add the address you want expiry notifications sent to. Click 'Save'. Click the 3 dots next to the thumbprint of the certificate you've just made (the one with the latest expiration date). Click 'Download federated certificate XML' and save it to your computer (you'll need this later). Click the 3 dots again and click 'Make certificate active'. Click 'Save' and close the window.

  8. Open the AWS Management Console and log in to the account you are setting up with SSO.

  9. Search for the IAM service and open it. Click Identity providers then 'Add provider'. Leave SAML selected. Enter AzureAD as the provider name. Click 'Choose file' under Metadata document and find the XML file downloaded previously. Click 'Add provider' at the bottom.

  10. Open 'Roles' in the left menu and click 'Create Role'. Click 'SAML 2.0 federation' along the top. In the SAML provider dropdown, select AzureAD. Tick 'Allow programmatic and AWS Management Console access' then click 'Next: Permissions'. Select the permissions required for the role you are creating. You can repeat this step and create as many roles as required to map to your 365 groups/users. Click 'Next: Tags' then 'Next: Review'. Give the role a meaningful name that describes the name/purpose of the AWS account and the role you have created within it, for example Dev-Admin. In the role description box, briefly describe what this role is for, for example 'Azure AD mapping for admin role'. Click 'Create role'.

  11. Open 'Policies' in the left menu and click 'Create Policy'. Click the JSON tab. Replace the contents with the following: 

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                "iam:ListRoles"
                ],
                "Resource": "*"
            }
        ]
    }


  12. Click 'Next: Tags', then 'Next: Review'. In the name field, enter AzureAD_SSOUserRole_Policy. For Description, enter 'This policy will allow fetching the roles from AWS accounts'. Click 'Create Policy'.

  13. Open 'Users' in the left menu and click 'Add user'. Enter the user name as AzureADRoleManager. Select 'Programmatic access'. Click 'Next: Permissions'. Select 'Attach existing policies directly' from the top. Search for the newly created AzureAD_SSOUserRole_Policy in the filter section and select it. Click 'Next: Tags', 'Next: Review', 'Create user'. Copy the Access key ID and Secret access key somewhere safe.

  14. Go back to the Azure portal. Click 'Provisioning' within the app you made earlier. Click 'Get started'. Change the Provisioning Mode to Automatic. In the 'clientsecret' box, enter the Access key ID. In the 'Secret Token' box, enter the Secret access key. Click 'Test Connection' to make sure it works. Click 'Save' and close the window.
  15. Go to https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/RegisteredApps and select 'All applications'. Click on the application you have just added. Click 'App roles' on the left. Click 'Create app role'. Give it a relevant name, for example 'Admins'. Under 'Allowed member types' select Users/Groups. In the Value field, enter the name of the role you created in step 10. Give it a relevant description, for example 'Grants users the Dev-Admin role'.
  16. Click 'Users and groups' on the left hand menu then click 'Add user/group'. Select the user/group you want to give access to. Try to keep this to 365 groups such as 'AWS Admins'. Under 'Select a role', choose the role you created above. Be sure to select the one that corresponds to the role name in AWS, for example Dev-Admin. Click 'Assign'. If you don't see your role in here, give it up to 40 minutes to resync.

CLI Access


The following is taken from https://github.com/Versent/saml2aws and https://github.com/Versent/saml2aws/blob/master/doc/provider/aad/README.md

As well as the AWS CLI, install saml2aws using this link - https://github.com/Versent/saml2aws

Run the command vi .saml2aws and insert the contents below into the file:

[prod]
name                    = default
app_id                  = the application id in Azure
url                     = https://account.activedirectory.windowsazure.com
username                = youremail@address.com
provider                = AzureAD
mfa                     = Auto
skip_verify             = false
timeout                 = 0
aws_urn                 = urn:amazon:webservices
aws_session_duration    = 3600
aws_profile             = prod
resource_id             =
subdomain               =
role_arn                = arn:aws:iam::accountnumber:role/rolename
region                  = eu-west-1
http_attempts_count     =
http_retry_delay        =
credentials_file        =
saml_cache              = false
saml_cache_file         =
target_url              =
disable_remember_device = false
disable_sessions        = false


[dev]
name                    = default
app_id                  = the application id in Azure
url                     = https://account.activedirectory.windowsazure.com
username                = youremail@address.com
provider                = AzureAD
mfa                     = Auto
skip_verify             = false
timeout                 = 0
aws_urn                 = urn:amazon:webservices
aws_session_duration    = 3600
aws_profile             = dev
resource_id             =
subdomain               =
role_arn                = arn:aws:iam::accountnumber:role/rolename
region                  = eu-west-1
http_attempts_count     =
http_retry_delay        =
credentials_file        =
saml_cache              = false
saml_cache_file         =
target_url              =
disable_remember_device = false
disable_sessions        = false

You can then launch your terminal and run saml2aws login -a prod or saml2aws login -a dev. This will prompt for your AD password, then save a set of cached credentials for CLI access. You can then run CLI commands in this format: aws --profile prod ec2 describe-instances or aws --profile dev ec2 describe-instances

You can set up additional profiles in the .saml2aws file by using the above examples and changing the app_id, aws_profile and role_arn. These values must correspond to the App ID in Azure, have a unique aws_profile value which describes the account, and have a role_arn that matches the role name in Azure/AWS.

Friday 26 March 2021

Cisco ASA BGP VPN to AWS

ASA Basics

Login credentials are in LastPass. The superuser in the ASDM is enable_15. Once logged in you should create your own login by going to Configuration > Device Management > Users/AAA > User Accounts. Add yourself a user in here with Privilege Level 15. You will need to do this before you can log in using SSH. Once you have a user, always use this rather than enable_15.

You can SSH to the box using any SSH client and connecting to 172.16.1.252. Log in with your own user. You will then most likely need to elevate your permissions by running the command enable, then entering the enable_15 user's password. Once you are in as 'enable', you can run conf t to go into 'configure terminal' mode, followed by commands like sh bgp summ (BGP neighbour status) or sh bgp neighbor (more detailed output). This information can also be found in the ASDM though.

Always take a backup before making changes. This can be done from Tools > Backup Configurations. Click 'Browse Local' and pick where you want to store it. 

Whenever you make a change and apply it, you need to click Save at the top. It will prompt you to save when you exit the ASA. If you don't save it, the change won't persist through a reboot (unless you specifically tell it to when scheduling the reboot).

To do a reboot, go to Tools > System Reload. From here you can choose to schedule the reload (reload is Cisco terminology for restart/reboot). If you leave the options as default and click the 'Schedule Reload' button it will do it immediately.

BGP is a routing protocol, short for Border Gateway Protocol. AWS tunnels created as dynamic (rather than static) use BGP. This allows one of the two tunnels that AWS creates to go down and automatically advertise this to our VPN endpoint, which will direct all traffic down the tunnel which is up. AWS do maintenance on their tunnels without warning, so it is essential to use dynamic routing.




Tunnel configuration in AWS:

In AWS, go to Services > VPC and under Virtual Private Network (VPN), select Site-to-Site VPN Connections. Here you will see the configuration for the AWS side of the tunnel. It consists of a customer gateway (our Cisco ASA) and a virtual gateway (the AWS endpoint). In this screen you can download the configuration, which actually gives you the list of commands you can paste into the ASA. A few changes need to happen to this configuration before you do that though:

Do a find and replace on 'outside_interface' and change it to the name of the outside interface on the ASA, typically GigabitEthernet1/1 (include the quotes around outside_interface). 

Find and replace crypto ikev1 policy 200 and crypto ikev1 policy 201 with the next available numbers if you have existing policies/tunnels on your device (this needs to be incremented to an unused number).

Find and replace interface Tunnel1 and interface Tunnel2 with the next available numbers if you have existing tunnels on your device (this needs to be incremented to an unused number).

The next line below each of the Tunnel lines above reads nameif Tunnel-*** (AWS assigned ID). This ID should be changed to something a bit more meaningful to us to make administration on the ASDM interface easier. 

Finally, the config doesn't come with any routing information, which must be added by copying the below into the end of the config file and editing it:

prefix-list nameoftunnel-IPV4-BGP-IN seq 5 permit IP CIDR Block in AWS
prefix-list nameoftunnel-IPV4-BGP-OUT seq 5 permit Office Network
route-map PASS permit 10
route-map nameoftunnel-IPV4-BGP-IN permit 10
match ip add prefix-list nameoftunnel-IPV4-BGP-IN
route-map nameoftunnel-IPV4-BGP-OUT permit 10
match ip add prefix-list nameoftunnel-IPV4-BGP-OUT

router bgp 65000
address-family ipv4 unicast
neighbor 169.254.105.45 remote-as 64512
neighbor 169.254.105.45 activate
neighbor 169.254.105.45 route-map nameoftunnel-IPV4-BGP-IN in
neighbor 169.254.105.45 route-map nameoftunnel-IPV4-BGP-OUT out
neighbor 169.254.50.141 remote-as 64512
neighbor 169.254.50.141 activate
neighbor 169.254.50.141 route-map nameoftunnel-IPV4-BGP-IN in
neighbor 169.254.50.141 route-map nameoftunnel-IPV4-BGP-OUT out
redistribute connected route-map PASS
redistribute static route-map PASS
no auto-summary
no synchronization


Find and replace nameoftunnel and change it to something relevant to the new tunnel, for example TestEnvironment.

Find and replace IP CIDR Block in AWS with the CIDR range of the VPC in AWS (VPC, not subnet)

Find and replace Office Network with the CIDR block of the network you want to allow access from (where your ASA is)

Find and replace 169.254.105.45 with the IP address of the BGP neighbor in the first section 4 of the AWS configuration 

Find and replace 169.254.50.141 with the IP address of the BGP neighbor in the second section 4 of the AWS configuration


With your completed configuration file, you can now copy it and paste it into the ASDM software by going to Tools > Command Line Interface and changing it to Multiple Line mode. Once pasted in, click Send and it will run it against the ASA. Close this window, wait 10-15 seconds, and you will be prompted to refresh the configuration, which will now be live. The tunnel can take a few minutes to establish.




Additional configuration required in AWS

In AWS, you will need to edit the route table for the subnets to route traffic for your office network to the Virtual Gateway that is configured for use by the VPN tunnel

You will also need to configure the security groups for any EC2 instances to allow the traffic you want (RDP/ICMP for example)





Additional configuration needed at the office network

You will need to configure a static route on your router to route traffic to the AWS subnet to the ASA


Creating and Editing AWS VPCs, Subnets, Gateways and Route Tables

 

Creating a new VPC

VPCs (Virtual Private Clouds) are where everything in AWS lives.

To create a new VPC, search for VPC in the console, go to 'Your VPCs' and click 'Create VPC'.

Give the VPC a meaningful name. Next is assigning the IPv4 CIDR block. This is the IP range that you can later create subnets within (more on that later). It is a good idea to reserve as big a block as possible for this, as you cannot extend these. You can however add an additional CIDR block to the VPC, but depending on what you have specified for the starting block, you may be unable to continue on directly from that. I typically create a /16 VPC, for example 10.30.0.0/16. This gives you addresses from 10.30.0.0 to 10.30.255.255.

There is no need to create an IPv6 CIDR block at this time. 

Tenancy can be left as Default.

There is no need to add Tags unless you want to use this for billing tracking. 

Click on Create VPC and it will be created.


Creating a new subnet

In the left hand menu under where you selected 'Your VPCs', you will see 'Subnets'. Open this and click 'Create subnet'.

Here you will be asked to select a VPC. Select the VPC you want to add this subnet to and the rest of the settings will become visible.

To allow for maximum resiliency, it is a good idea to create a public and a private subnet in each Availability Zone (eu-west-1a, eu-west-1b, eu-west-1c). To allow you to easily identify which subnet is in which AZ, name them accordingly: Public Subnet A, Public Subnet B, Public Subnet C, Private Subnet A, Private Subnet B, Private Subnet C.

Select the Availability Zone that corresponds with the name.

For the IPv4 CIDR block, bear in mind that you want to create 6 subnets out of the CIDR block you assigned to the VPC. It's a good idea not to wildly overprovision these, but at the same time you don't want to be too restrictive, as you cannot extend subnets; you would have to create new ones and it can get messy. Use the tool https://www.ipaddressguide.com/cidr to work out what CIDR blocks to assign. /20 isn't a bad place to start as it will give you 4096 IP addresses per subnet. Using that tool, you can work out that if you make your first subnet 10.30.0.0/20, the final IP address in that range is 10.30.15.255. This means your second subnet can start at 10.30.16.0/20. The last IP in this block is 10.30.31.255, so the next can start at 10.30.32.0/20 and so on.
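The subnet arithmetic described above can be sketched with Python's standard ipaddress module (10.30.0.0/16 and the /20 subnet size are just the example values used in this guide):

```python
import ipaddress

# Example VPC block from this guide
vpc = ipaddress.ip_network("10.30.0.0/16")

# Carve out the first six /20 subnets: three public, three private
names = ["Public A", "Public B", "Public C", "Private A", "Private B", "Private C"]
subnets = list(vpc.subnets(new_prefix=20))[:6]

for name, net in zip(names, subnets):
    # e.g. "Public A: 10.30.0.0/20 (10.30.0.0 - 10.30.15.255, 4096 addresses)"
    print(f"{name}: {net} ({net[0]} - {net[-1]}, {net.num_addresses} addresses)")
```

This confirms the ranges worked out above: the first subnet ends at 10.30.15.255, so the second starts at 10.30.16.0/20, and so on. Note that AWS reserves 5 addresses in each subnet, so usable capacity is slightly lower than the raw count.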

When you have put in the IPv4 CIDR block for your first subnet, click 'Add new subnet' and keep going until you have made all the subnets you require. When you are done, click 'Create subnet'.


NAT Gateways

With the VPC made and the subnets created, the only thing stopping you from spinning up machines is setting up routing and internet access. 

In the last step we made both public and private subnets. The reason is that we want to ensure that only things that need to be publicly exposed go into the public subnet, and anything that should remain internal only goes into the private subnet. What makes a subnet public or private is the routing.

In the left hand menu, click NAT Gateways. Here we are going to create a NAT gateway in each public subnet. These NAT gateways are going to be used by the private subnets to allow them to route out to the internet. Click 'Create NAT gateway'. 

Give it a name relating to the subnet that you are creating it in for example NAT GW Pub A. 

Select the PUBLIC subnet you are creating it in. 

Next, click 'Allocate Elastic IP', which will create an Elastic IP address and assign it to the gateway.

Now, click 'Create NAT gateway'. Repeat these steps for the remaining public subnets.


Internet Gateways

NAT Gateways are the mechanism used to give internet access to private (internal) services. Things that live in the public subnet use an Internet Gateway. To create this, select 'Internet Gateways' in the VPC menu and click 'Create Internet gateway'.

Very simple this time: just give it a name and click 'Create Internet gateway'. This isn't tied to a specific subnet or AZ so it can be called anything. Just try to call it something that relates to the VPC, for example 'MyFirstVPC Internet Gateway'.


Route Tables

With your VPC, subnets, NAT gateways and Internet gateway all created, it's time to tie them all together. In the VPC menu, click 'Route Tables'.

There is typically no need for multiple route tables for the public subnets, but you will need a separate route table for each private subnet in order to control which NAT gateway it routes internet traffic to. For example, there would be no point in routing internet traffic for private subnet A to the NAT gateway in public subnet B. While it would work, it is bad for resiliency: if AZ B went offline, your private subnet A would lose internet access. Instead, you need to route internet traffic for each private subnet to the NAT gateway in its corresponding public subnet.

Click 'Create route table' to get started.

Give it a name. For the public route table, just call it 'VPC name Public Route Table' or something like that. For the private subnets, call them something like 'VPC name Private Subnet A' and so on. Select the VPC.

You will now see it in the list of route tables. Tick the one you want to edit and you will see some tabs appear at the bottom. The first tab to edit is 'Routes'. For the public subnet, you need to add a route to the internet gateway you created. To do this, click 'Edit routes' 

Add the route 0.0.0.0/0 and select the internet gateway as the target. Click Save routes. 

Now you need to associate the subnets you created to this route table. Go to Subnet Associations tabs and click 'Edit subnet associations'. 

Tick the public subnets and click 'Save'.

Now you need to create a route table for each private subnet. Follow the same steps as above but name them accordingly for each subnet. When editing the routes, add a route to 0.0.0.0/0, selecting as the target the NAT gateway you created in the corresponding public subnet. When editing the Subnet Associations, remember to associate only the private subnet corresponding to the name of that route table.


Editing existing VPCs/Subnets

In the event of running short on IP addresses in subnets, you can add more. Ideally there will be scope within the existing VPC if a large enough CIDR block was created. If this is the case, you will need to review the subnets in that VPC and see where the last one ends. You can then add more subnets in the same way as explained above when creating new ones. If it's only extra private IPs you need, there is no need to create additional public subnets; you can leverage the NAT gateways in the existing public subnets. Once a new subnet has been made, you need to edit the 'Subnet Associations' in the route table for that private subnet in the same way we set it up above. Just add your new subnet into the relevant route table, making sure to associate it with the correct Availability Zone. If it's public subnets you need, just add them to the existing public subnet route table in its 'Subnet Associations'.

In the event that there is no scope within the existing VPC to add additional subnets, you will need to add an additional CIDR block into it. To do this, go to 'Your VPCs' in the VPC menu, tick the VPC you want to extend and click 'Actions'. Click 'Edit CIDRs'.

In here, click 'Add new IPv4 CIDR'.

Specify the size of the CIDR block you want. If the previous one had scope to use a part of that block, you could continue on from that (use https://www.ipaddressguide.com/cidr to work it out). Otherwise you could choose a new class B address, for example 10.31.0.0/16. Save this.
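A quick way to sanity-check that a proposed secondary CIDR doesn't clash with the existing VPC range is again Python's ipaddress module (10.30.0.0/16 and 10.31.0.0/16 are the example blocks from this guide):

```python
import ipaddress

existing = ipaddress.ip_network("10.30.0.0/16")  # current VPC CIDR
proposed = ipaddress.ip_network("10.31.0.0/16")  # candidate secondary block

# overlaps() is True if the two ranges share any addresses
if existing.overlaps(proposed):
    print(f"{proposed} overlaps {existing} - pick a different block")
else:
    print(f"{proposed} is clear of {existing} and safe to add")
```

This is only a local sanity check; AWS applies its own validation (against every CIDR already associated with the VPC) when you click Save.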

Once you have added this into the VPC, you can go back to Subnets and create your subnets as per the above guide. You can then associate them with the existing route tables.

Enable SharePoint Online Site Scoped Publishing Features

There appears to be a known issue when trying to activate SharePoint Server Publishing Infrastructure within the GUI of SharePoint Online. This would usually be achieved through Site Settings > Site Collection Administration > Site Collection Features, but the page can hang and time out citing "Sorry, something went wrong Save Conflict. Your changes conflict with those made concurrently by another user. If you want your changes to be applied, click Back in your Web browser, refresh the page, and resubmit your changes."

The workaround for this is to do it via PowerShell. I had the script below supplied to me by Microsoft after raising a support ticket.

The first thing to do is to install the PnP PowerShell module by running the command below:

  • Install-Module SharePointPnPPowerShellOnline

With the module installed, edit the script below to reflect the URL of your SharePoint Online site in the $SiteURL variable.


#Config Variable
$SiteURL = "https://yoursite.sharepoint.com/sites/sitename"
$FeatureId = "f6924d36-2fa8-4f0b-b16d-06b7250180fa" #Site Scoped Publishing Feature
 
#Connect to PNP Online
Connect-PnPOnline -Url $SiteURL -Credentials (Get-Credential)
 
#get the Feature
$Feature = Get-PnPFeature -Scope Site -Identity $FeatureId
 
#Get the Feature status
If($Feature.DefinitionId -eq $null)
{   
    #sharepoint online powershell enable feature
    Write-host -f Yellow "Activating Feature..."
    Enable-PnPFeature -Scope Site -Identity $FeatureId -Force
 
    Write-host -f Green "Feature Activated Successfully!"
}
Else
{
    Write-host -f Yellow "Feature is already active!"
}


Save the script and execute it from PowerShell and this will enable the feature.

Thursday 21 January 2021

Replace ADFS Service Communication SSL Certificate ADFS 3.0

  1. Log onto the AD FS server and, from the Certificates MMC snap-in, import the new certificate into the server's Personal certificate store. Right click the Certificates item and select All Tasks > Import. Import your PFX bundle.
  2. Right click the new certificate and select All Tasks > Manage Private Keys. Assign read permission to the service account used to run the AD FS service and click OK.
  3. Launch the AD FS Management Console, expand the Service menu in the left pane and click Certificates. Click the link 'Set Service Communications Certificate' to set the new certificate. Select the valid certificate and click OK. Click OK to close the message. The certificate under Service communications has been updated.
  4. Right click the newly imported SSL certificate and select Open. Select the Details tab, find the Thumbprint for the new certificate and copy it, removing any spaces. From PowerShell run the command Set-AdfsSslCertificate -Thumbprint <ThumbprintID>
  5. Restart the AD FS service on the server
Update the Web Application Proxy Server

  1. Log onto the WAP server and import the new certificate as per the above steps
  2. Open PowerShell and run the command Set-WebApplicationProxySslCertificate -Thumbprint <ThumbprintID>

Thursday 14 January 2021

Replacing SSL certificates on Exchange 2013

 

  • Copy the PFX file to exch03
  • Open Exchange PowerShell as admin and run "certutil -csp "Microsoft RSA SChannel Cryptographic Provider" -importpfx name_of_file.pfx" - failing to import like this and doing it through the GUI instead may lead to a loop when logging into ECP/OWA
  • Assign the SMTP and IIS services in ECP > Servers > Certificates, overwriting the old certificate.
  • From a command prompt as admin, run "iisreset" - this will interrupt your Exchange services
  • Delete the old certificate. If it complains that IIS/SMTP services are still in use, run this PowerShell to enable those services on your new certificate: "Enable-ExchangeCertificate -Thumbprint <Thumbprint> -Services 'iis,smtp'"
  • If it complains that it is in use on a send connector when trying to delete the old certificate, follow these steps:

  1. From ECP, open the certificate you want to use and note the thumbprint
  2. In Exchange PowerShell run "$cert = Get-ExchangeCertificate -Thumbprint <thumbprint>"
  3. Set a new variable and assign it the concatenated values of the Issuer and Subject values of the certificate (must also include <I> and <S> before each field):
    $TLSCert = ('<I>' + $cert.Issuer + '<S>' + $cert.Subject)
  4. Update the send connector with the new values:
    Set-SendConnector -Identity "sendconnectorname" -TLSCertificateName $TLSCert