The Scenario

I have worked with a QuickBooks Enterprise customer for years. Not only had their company file grown quite large, but they were unable to upgrade to newer versions, still locked on 2019. Part of getting this backup problem fixed was cleaning up the company file, so a little digression on those fixes first.

The Fixes

Over the years and several different bookkeepers, several issues were created in the company file:

  • When a tax rate changed, instead of creating a new sales tax item, the rate on the existing item was changed. This meant that all sales orders and invoices that had correctly used the original rate were now tied to a sales tax item whose rate no longer matched.
  • When a U/M (unit of measure) change was required for an inventory item, the U/M was simply changed on the item. What should have been done was to modify the item's U/M set to add the new measure.
  • Especially for walk-in customers, when an item was out of stock – at least according to inventory on hand – it was invoiced anyway. Procedures were modified a while back to start with sales orders and only invoice for actual items on hand.

Utilities to the Rescue

Each month, there are hundreds of sales orders and invoices, so fixing items one by one would have been more than onerous. Fortunately, I found a handful of utilities from https://q22.us that were helpful if not essential. One in particular, their mass update utility, allowed me to make changes to a large number of records at once. So fixing the issues took days (still a lot of them) instead of months and months.

Next Steps

With the file’s data repaired, I was able to successfully rebuild and then condense the data. Following that, I was able to upgrade from 2019 directly to 2021. Yippee!

Not So Fast…

Unfortunately, backup stopped working. Whether doing a Save Now or scheduling a future backup, it would fail without providing any detail: it just didn’t work. Finally, it occurred to me that there is a backup log, and I decided to look there. To get to it, press F2 and then F3. On the F3 screen, click the Open File tab, select QBBACKUP.LOG from the list that appears, and open the file.
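The log can also be read from the command line; a minimal PowerShell sketch, assuming the company file (and its log) live in a hypothetical C:\QBData folder:

```powershell
# Show the most recent entries of the QuickBooks backup log
# (the path is an assumption; the log sits alongside the company file)
Get-Content "C:\QBData\QBBACKUP.LOG" -Tail 50
```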

What I found was that the backups were failing with a SQL -82 error – invalid character in database name.

Hmm, why had this been working for years but now wasn’t?

The Company File Name…

When the file was first set up (2005!), the name of the business was entered as <name> CO. What QB did was add .qbw to that name, producing <name> CO..qbw. Note the double period. When backup tries to open the file, it actually creates an alias that puts a space between the two periods, so the alias appears to contain a space in the database name. That is not allowed, and it creates the error.

How to Change the File Name

Open Windows File Explorer, then navigate to the location where the company file is stored. Right-click on the .qbw file and choose Rename; you can enter a completely different name or simply remove the extra period. The other files associated with the .qbw will have their names changed as well. From then on, backup should work just fine. If it still fails, the problem is related to data in the file, which can be rebuilt using the utilities.

Oh, just make sure QB is closed on all devices that access the company files. You do not have to stop the QuickBooksDB31 service, but if using QB 2019, you will have to stop the QuickBooksDB29 service.
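Scripted, the fix might look like this sketch; the folder path, file name, and which QuickBooksDB service applies to your version are all assumptions to adjust:

```powershell
# Stop the QB database service first (QuickBooksDB29 serves QB 2019; adjust for your version)
Stop-Service QuickBooksDB29

# Remove the extra period from the company file name (path and name are hypothetical)
Rename-Item "C:\QBData\nameCO..qbw" -NewName "nameCO.qbw"

Start-Service QuickBooksDB29
```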


The Scenario

Two Windows Server 2019 servers, both domain controllers (DCs), both Hyper-V virtual machines running on the same host. They were originally installed as 2012, then upgraded to 2012 R2, 2016, and finally 2019. As you can guess, they had been around for a while.

Nothing really changed on the host or the VMs. However, the office was physically moved, and in setting up the new network, a new router was also set up, with nearly identical settings to its predecessor. When the host was started, everything seemed to start up perfectly, and all member computers on the domain seemed fine too.

Then an Error Appeared

Full disclosure: the error had probably been appearing for a while, but I only recently noticed it. What tipped me off was losing access from a domain workstation to a NAS device’s folders that had domain permissions. When I remoted into the DCs to investigate, the Kerberos errors appeared in Server Manager.

And so did lots of others, primarily related to DNS and DHCP: DNS zone transfers were not taking place, and neither server could manage the other’s settings. Because I had configured DHCP for failover, the same was true for DHCP as for DNS: no management, no replication.

Trying to Find a Resolution

The first thing I tried was to restore the .vhdx files for the Hyper-V machines from prior backups. I had about 2-1/2 weeks of backups available, but going back to the oldest did not change the issues. At least I knew that I could screw around with them but could always restore them to at least the oldest state.

Searching for the Kerberos Security Error did not yield any meaningful results. Same with searching for the event IDs showing in the system logs. I posted questions on a few support forums but got a whopping zero responses. I was on my own.

Network Tools That Saved the Day

I first used nslookup and was a bit surprised to get “unknown server” on one of the DCs but a correct response on the other. I began referring to these as the “bad” and “good” servers, which was fortunate, as it turned out.

Next I ran dcdiag on both servers, and each failed with the Kerberos Security Error. That spurred a new round of web searches that yielded nothing of value. It did cause me to play around with DHCP a bit, a change I would later have to undo and will cover later.

When I read the dcdiag messages more carefully, they caused me to once again focus intently on DNS. I could make no progress in DNS Manager, and I was loath to try to manipulate the DNS zone values. So for about the umpteenth time, I looked at the NIC DNS settings.

On each server, I had the first DNS server set to the server itself and the second to the other server, so each pointed to itself first. I just could not see a problem.

I then did an ipconfig /all on both servers to compare them, thinking some value was incorrect but I had just missed it. It must have been a half hour later I looked at the results, still being displayed, when I noticed something different.

DNS Server Results Were the Problem

On the bad server, the first and almost unnoticeable entry under DNS Servers was the IPv6 address ::1. Where could that have come from? I have never configured any IPv6 values on any system. But I checked the NIC’s IPv6 properties: for DNS servers, instead of automatic, “Use the following DNS server addresses” was selected, with a value of ::1. I changed it back to automatic.
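The same check and fix can be done from PowerShell; a sketch, where the adapter name "Ethernet" is an assumption for your NIC:

```powershell
# List the configured DNS servers per adapter; look for a stray ::1 under AddressFamily IPv6
Get-DnsClientServerAddress | Format-Table InterfaceAlias, AddressFamily, ServerAddresses

# Reset the adapter's DNS servers back to automatic (removes statically configured entries)
Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ResetServerAddresses
```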

Lo and behold, the Kerberos error disappeared. DNS and AD replication were back in business. DHCP almost was.

Somehow, while I was trying to fix DHCP as a standalone problem, the scopes that had been replicated from the good server to the bad server went poof. The bad server’s DHCP did not even show failover replication. The first thing I did was restore a backup (I always keep a few), but that did not bring back the scopes. When I tried to replicate from the good server, it failed because it could not find a scope.

So on the bad server, I added a scope with the same start and end IP addresses as on the good server. Then replication worked, and voila! the scope on the bad server was updated properly.

And now all is working well. Shame on me for not noticing this sooner, as restoring the .vhdx files would have been an effortless fix; that’s a great advantage of virtual machines. But at least fixing this seemingly ghastly problem was simple once I figured out the cause.

Hope this helps you if you ever need it.

Site-to-Site VPN Trendnet and Cisco

Posted: October 9, 2020 in SBS 2011

Trendnet TEW-829DRU Router

This router supports PPTP, L2TP, and IPSec VPN settings. This post describes an IPSec site-to-site tunnel connection.

Cisco RV220W Router

The router at the other site is this Cisco SMB router. I did pick the Trendnet, but the Cisco was already installed, so I want to tell you how I got them to work together properly. It required some tweaking.

While specific to these two routers, I believe any router in the Cisco RV family would behave the same, and the same goes for routers other than the Trendnet.

General Comments about IPSec Site-to-Site

In general, it’s necessary to create definitions on each side that mirror each other. For each side, you need to specify the external IP address and the internal LAN address of both the hosting site and the target site. Let’s call the host site Site H and the target site Site T.

Site H’s definition points to its own external IP address and internal LAN address, and also to Site T’s external IP address and internal LAN address. Site T’s definition is the mirror image: its local values are Site H’s remote values, and vice versa.

Site H external address 123.123.123.123 with LAN 192.168.20.0/24
Site T external address 121.121.121.121 with LAN 192.168.30.0/24

At Site H, local WAN is 123.123.123.123 and LAN 192.168.20.0/24;
remote WAN is 121.121.121.121 and LAN 192.168.30.0/24.

At Site T, local WAN is 121.121.121.121 and LAN 192.168.30.0/24;
remote WAN is 123.123.123.123 and LAN 192.168.20.0/24.

This should make it clear that each site points to itself and to the other site. NOTE: instead of an IP address, most routers will accept an FQDN that resolves to one.

Two Phases of IPSec Negotiation

When the two sites attempt to establish a secure tunnel, there are two phases of negotiation that take place. On the Trendnet router, the parameters are described as Phase 1 and Phase 2, but on the Cisco, they are IKE and VPN Policies. Still, they address the same parameters.

To set up the site on Trendnet, follow these steps:

  1. Log on with administrator account.
  2. Navigate to Network->VPN and then click on the IPSec tab at the top.
  3. Enter a tunnel name (whatever you wish) and then click Add.
  4. This should take you to the General Settings page; use the following options:
    • Connection type – Site-to-Site in drop down
    • Failover – leave as disabled for now
    • Authentication type – IPSec IKEv1 PSK from drop down
    • Enter local WAN and LAN subnet
    • Enter remote WAN and LAN subnet
    • Save and apply
  5. Now you should see that tunnel set up, and to the right of it, click EDIT.
  6. Back on the General settings, click Advanced tab at the top.
  7. Make the following changes:
    • Uncheck both Phase 1 and Phase 2 autoconfigure boxes.
    • Choose 3DES for the cipher algorithm.
    • Choose SHA1 for the hash algorithm.
    • Choose 2(1024-bit) for DH Exchange.
    • Use 3DES and SHA1 for Phase 2 transform algorithm and HMAC algorithm.
    • Leave PFS Exchange off.
    • Save and apply.
  8. Back on the main page, enter the pre-shared key. The longer and more complex, the better.

To set up the site on Cisco, follow these steps:

  1. Log on with administrator account.
  2. Navigate to VPN->IPSec->Basic VPN Setup.
  3. Select Gateway as the Connection type.
  4. Enter the tunnel name (does not have to match the other site name) and the pre-shared key you entered for Trendnet.
  5. Select IP address or FQDN and enter the remote value for the WAN; the local value should already be filled in, but you can change it.
  6. Enter remote and local LAN address and subnet mask. Use the lowest IP address for the subnet. For example 192.168.20.0 with subnet mask of 255.255.255.0.
  7. Save.
  8. Now navigate to Advanced VPN Setup and in the VPN Policy Table section, click ADD.
  9. Set the parameters as follows:
    • Policy name – I use the tunnel name but your choice.
    • Use IP or FQDN for the remote endpoint.
    • Use subnet for Local IP and enter start IP and subnet mask.
    • Same for remote.
    • Leave all other parameters as default. Encryption should be 3DES and Integrity algorithm should be SHA-1.
    • In the Select IKE policy drop down, choose the corresponding one you just set up, then click SAVE and then BACK.

All Done

At this point, if you have entered the correct IP values for both sites, the VPN should connect. You can check the Cisco dashboard to see that they are connected. If for some reason they are not, check the system logs. The Cisco logs in particular are pretty informative, step by step, but the Trendnet logs also tell you what was proposed and accepted in Phase 1 and 2, so that can be helpful.

Once connected, you should have access to both the local and remote LANs. As a word of caution, though, browsing by NETBIOS name may not work without some additional setup to access DNS servers on both sides. Maybe that will be a topic for a subsequent post.

Let me end by saying two things:

  1. These are not the only parameters for IPSec that you could choose. However, when I did not use the default parameters on the Cisco, the tunnel connected but no traffic between the two sites was possible. With other routers this might not be the case.
  2. Just make sure that each site points to itself and the other, and that all the Phase 1 and Phase 2 parameters are the same.

Just When You Think It Can’t Happen

I was checking on the health of my server as I saw it had updates to apply, and to my shock and horror, I noticed that backups had been failing for weeks. Shame on me.

I applied the updates but before rebooting set about to fix the backup problem.  Of course, backup never really kicked off for me to fix.  It just sat at reading data for a few hours.  So I finally decided to reboot.

But mouse clicks ANYWHERE did nothing.  I could not get to the Start menu to restart.  Finally CTRL+ALT+DEL worked, and I clicked on the power icon to restart.  Phew.

Not so fast.  In fact, very slow.  After two hours, the screen still said shutting down and persisted for another hour.  So like any good IT person, I manually powered it off.  When I powered on to restart the first thing I got was NO BOOTABLE MEDIA.

So I tried….

I removed the SSD and mounted it via a USB kit onto a Windows 10 computer.  I ran Partition Assistant, and the boot, OS and data partitions were all there.  I browsed them and they looked fine.

I did a few things like using a USB flash drive with the Server OS to boot, then went to cmd prompt to do diskpart, bcdedit, bootrec and all that stuff.  Most things I really needed to work, like /rebuildbcd, just didn’t.

One thing seemed to make a difference.  I set the boot partition to active with diskpart, and the error went from no bootable media to no operating system found.  That led me to believe the issue was on the OS partition, and it turns out it was.
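For reference, the diskpart step looked roughly like this; the disk and partition numbers (0 and 1) are assumptions that depend on your layout:

```
diskpart
list disk
select disk 0
list partition
rem partition 1 here is assumed to be the System Reserved (boot) partition
select partition 1
active
exit
```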

So I thought…

I would copy the OS and boot partitions from the Windows 10 system, then boot the OS install from the flash drive, install a new, clean OS in place of the boot and OS partitions, then restore.  But no, and I'm glad I didn't.  I would just have reinstalled a non-bootable OS again.

Then I found…

EasyRE from Neosmart.  A utility billed to automatically fix non-booting OS disks.  https://neosmart.net/EasyRE/  There are a handful of versions, but I got the server version.  It downloads, after a $99 purchase, as an .iso file.  There is a handy utility to burn it onto bootable DVD (or CD) media.  I then booted the server with it, worried at first about the tons of lines of (to me) meaningless stuff that scrolled by, and eventually it booted into a very simple interface.  The first option is Automatic Repair, which I selected, and then I was able to choose the OS partition from a list of partitions that showed up.  Click Continue, and after about 15 seconds of actions scrolling by, it prompted for a restart.  Which I did.  Which very happily rebooted.  Just fine.

Some Server Cleanup Needed

The first thing that I noticed was that my virtual machines were not running.  The drive letter for that partition had changed, and I had to shuffle them around.  The VMs still would not start – hypervisor not running.  To fix that, I did this:

  1. Removed Hyper-V from Windows features and rebooted.
  2. Re-flashed the BIOS even though the current version was correct.  Rebooted.
  3. Added Hyper-V role back and rebooted.

Now the network adapters for the VMs were invalid, as new ones were created when the Hyper-V role was re-added.  I also had to change the real NIC adapters from dynamic to static IPs again.  After changing the NIC in each VM's settings using the new drop-downs, everything started just great.

Lastly, I went back to Windows Backup.  I discovered the OS drive had shadow copies disabled on all partitions, so I fixed that, and backup ran just fine.

All’s well that ends well.  But don’t let your backups not work!

 


It Was Most Obvious in Outlook Emails

Probably because that was my most frequent activity.  The slowdowns started happening a few hours after I did a reboot, and another reboot seemed like the only solution.

The symptoms were particularly noticeable when I had misspelled a word in an email and tried to right-click to correct it.   That right-click would take 2-3 or up to 6-7 seconds to have a pop-up menu appear, and maybe as long again for a suggested spelling to appear.  It wasn’t just for spell checking, though.  A right-click to select options of any sort behaved the same way.

Then I noticed the same thing was happening in Word, Excel and PowerPoint.

Trying to Find the Problem

I noticed that when this was happening and I launched Task Manager, my CPU usage was approaching if not hitting 100%.  I tried to find a service or program that was causing this, but what I saw instead were a handful of tasks hitting 30%-40% that were all “normal” things – Edge, Windows Explorer, etc.  This went on for days, and I couldn’t figure it out at all.

Then I Noticed Something

When I had Task Manager running and had clicked on CPU to show most active tasks, one that I hadn’t noticed before would just appear for a few seconds with high CPU utilization and then fade into lower utilization, again and again.  It was Windows Shell Experience Host.  I finally figured out that this high CPU usage had a most unusual cause….

What Caused It and the Fix

The surprise was the cause.  I had found a great Windows background slide show of astronomy photographs.  Not that Windows was star struck; rather, it was that the background was a slide show.  I changed it to a static picture (I also tried a solid color) and poof! The problem disappeared.

Why? I can’t speak to that.

You are welcome!


It’s not the first time Roboform and Edge stopped playing nice with each other.  When Edge first started getting updates, Roboform was a sometimes thing, a finicky  lover.  It certainly got better.

I had a Windows 10 update last night, and after rebooting, Roboform might as well not have been there at all.  I could click on it to fill in a password in Edge, but nothing happened. So I will make a longer story much shorter: here’s how to repair it.

  1. Right click on the Roboform icon in Edge and choose manage.
  2. Click the uninstall button and allow it to run.
  3. Close Edge.  Don’t forget this step!
  4. Open Microsoft store, search for Roboform and install.
  5. Reopen Edge.  You will find Roboform works again, no new settings required.

I also have it on Chrome and Firefox, so perhaps those hung onto the settings for me.


First an Upgrade in Place

I have several VMs running Windows Server 2016, and I wanted to upgrade to 2019.  An in place upgrade seemed like it was going to be effortless, and in fact, it was.  After the upgrade, however (and I foolishly picked a domain controller) I could not activate the product.  I called Microsoft activation – on a Sunday – and I was told that an in place upgrade just would not work and that’s why activation wizard rejected my product key.  But they did verify the product key was correct.

Then a clean install

To validate that theory, I did a clean install on a virtual machine.  Same thing.  The product key was rejected.  So I then grabbed a laptop that was not being used and did an install on that.  Same thing.  So an upgrade in place, a clean install on a virtual machine, and a clean install on a physical machine all did not work with the product key.

Command line to the rescue

Here is what did work

  1. Open cmd as administrator.
  2. Type SLMGR /UPK
  3. Type SLMGR /IPK <product key all caps with dashes>
  4. Type SLMGR /ATO

And voila! 2019 is activated.

I have validated this same procedure on Windows Server 2016 and feel confident it will work on Windows 10, although I have not tested it.


The .vhdx file has a unique permission for the virtual machine ID

If you, like I did, copy your .vhdx files for safekeeping in addition to having a regular backup, you cannot simply copy them back to your virtual hard disk location and have the virtual machine successfully start.  When you try to start the virtual machine, you will see an access denied error.

The reason is that the security permission on the file for the virtual machine is missing; it is not copied along with the file and its other permissions.

The Fix Is Straightforward

Essentially, you have to run PowerShell in order to add this permission back.  There is a very good summary of this information in this post.  However, the post leaves out a few important things you need to do in order for this to work.

First, you need to make sure PowerShell has the Hyper-V cmdlets installed.  It took me a while to figure this out.  When I tried to execute Get-VM, it failed until I ran the following:

Get-WindowsFeature Hyper*
Add-WindowsFeature Hyper-V-PowerShell

Then I was able to successfully run the get-vm and icacls cmdlets.

Another Few Things

You can specify the full path to the virtual disk, or you can change the directory path within PowerShell.  Just be sure to enclose the folder names in single or double quotes if there are spaces in the name.

As another point, after the VMID in the icacls command, be sure to include :F (no quotes) at the end before running it.
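Putting the pieces together, the repair might look like this sketch; the VM name and .vhdx path are hypothetical:

```powershell
# Get the virtual machine's ID (requires the Hyper-V-PowerShell feature installed above)
$vmId = (Get-VM -Name "MyVM").VMId

# Grant the VM's virtual account full control (:F) on the copied disk file
icacls "D:\VMs\MyVM.vhdx" /grant ("NT VIRTUAL MACHINE\$vmId" + ":F")
```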

It really is easier than it sounds.


Scenario

At a central location, a Windows Server hosts a handful of Hyper-V virtual machines and also hosts the QB data files.  Remote users access the virtual machines over a VPN using Remote Desktop to get the experience of running QB locally; QB performs abysmally over a VPN when the data is at one end and the desktop at the other.

As a side note, performance both for users on the LAN where the server is and for the virtual machines had started degrading substantially; by putting the Hyper-V machines, virtual disks, and QB files on an SSD, performance jumped about 10-fold; it’s as though everyone is using a high-speed workstation.

What Happened?

I cannot say for certain what went south but one of the virtual machines would no longer run QB and gave the 3371 error.  Uninstalling QB and re-installing did not change that.  What I did notice, however, was that Edge would not resolve any URLs and appeared to Windows 10 to be offline, but there was network connectivity. The start button didn’t work, and there were a big handful of updates pending download and install.

I finally did a network reset from the settings page, rebooted, and then had to make some adjustments to the virtual machine settings.  Primarily, this was to change the MAC address back to the value in the DHCP reservations; I wanted all the virtual machines to have fixed IP addresses for the users.  That worked, and after a few reboots, the updates started to download, but they all remained pending.  I kept restarting, but nothing budged.  Finally, I set the time for a restart attempt a few minutes ahead, and that got the list of updates to install when it rebooted.  Edge worked again.  But QB did not, even after a remove/reinstall.

What Did Work

The ultimate fix was very easy.

  1. I opened Control Panel and chose File Explorer Options.  On the View tab, I selected Show hidden files and folders and deselected the four Hide check boxes, accepting the error warning for the last one.
  2. I closed Control Panel and opened File Explorer. I could now see ProgramData and clicked on it, eventually navigating to C:\ProgramData\Intuit\Entitlement Client\v8.
  3. In that folder there is an ECML file, and I renamed it to .ecml.old.
  4. I closed File Explorer and then ran QB.
  5. QB then asked what version type (this was manufacturing and wholesale), and that setup ran.
  6. Then QB opened without an error, and all is well again.

I should add that updates show current and Windows 10 is back to behaving.

Hyper-V Followup

The virtual machines just run QB; users use their own desktops for other work, so the VMs are pretty static.  What I am now doing is making copies of the .vhdx files for the machines in a separate folder (on the SSD, but it doesn’t really matter; it could be anywhere, including the cloud).  I actually had a copy for this particular machine, but it was about four months old, and I had a feeling I could fix the problem, so I pressed on.  But if a VM goes south again, I just have to replace the .vhdx file with the saved copy.  Sure beats dealing with a handful of physical machines.  Oh, yeah, I didn’t even have to be there to do this.


When using PowerApps, you may often want to do what I have done: set the people picker field’s default value to the logged on user.  There are a handful of posts out there about what to do, but I didn’t find one that easily guided me to the right way to do this.

First, a bit of background you will need to know. A people picker field used for entering new data should be on a data card on a form on a screen.  Then you can modify the data card’s field attributes to get the current logged on user.

When you click on the data card, you will see a box border around several fields, one that displays the heading for the field and another that is the data itself.  Be sure to click on the data field to get to the correct attributes.  Here’s an important thing to note:

The data card is locked automatically when it is added to the form via a data source (a SharePoint list, e.g.).  To modify it in any way, you need to first unlock it.  On the right-hand PowerApps pane, note there is an Advanced tab at the far right. Click on that, then click Unlock near the top.

Now click on the data field.  Note that the upper-left toolbar has a drop-down selection list of field attributes in alphabetical order.  Choose DefaultSelectedItems and enter in the expression box next to the attribute name:

{
    '@odata.type':"#Microsoft.Azure.Connectors.SharePoint.SPListExpandedUser",
    Claims:Concatenate("i:0#.f|membership|",User().Email),
    DisplayName:User().FullName,
    Email:User().Email
}

The line breaks are for readability only; you can use a continuous string of characters if you wish. The important thing is to make sure all of the actual characters above are entered correctly.

I was frustrated for a day or two with this not working.  The reason was that I was entering the expression in the Default or Items attribute, and so nothing was displayed in the people picker.

You can set the people picker to other AD values as well.  For example, to set it to the logged on user’s manager, use

{
  '@odata.type':"#Microsoft.Azure.Connectors.SharePoint.SPListExpandedUser",
  Claims:Concatenate("i:0#.f|membership|",Office365Users.Manager(User().Email).Mail),
  DisplayName:Office365Users.Manager(User().Email).DisplayName,
  Email:Office365Users.Manager(User().Email).Mail
}

I needed a quick PowerApps app to display some calculated averages based on documents created by an InfoPath form with recurring fields; imagine an expense report, and you will have imagined something similar.  There were some tricky steps to get this to work, so let me save you some time and trouble in getting there.

First, to connect PowerApps to an XML forms library data source, choose SharePoint and then the site where the library is located; but since only lists appear, instead choose custom value (check the box) and enter just the library name. THE NAME CANNOT CONTAIN SPACES.  If your library name does contain spaces, you will have to create a new one and get the data into it.

Now add a gallery to the screen in PowerApps and put some fields onto the top data card using labels.  If you want to add up the values of a field on each row of the gallery, create a label outside of the row data cards but inside the gallery, and set its Text value to

Sum(Filter(datasource, field > 0), field)

Or if the field can be negative, change the comparison to field <> 0, as skipping 0 values won’t change the sum.

Why Sum(datasource.field) doesn’t work, I don’t know, but it doesn’t.

Thanks are not necessary but always welcomed!


A client called in a panic because a sub-site was no longer showing on the link bar, in site contents, and no links to documents there worked any longer.  Vastly puzzling was that there was nothing in the recycle bin related to the sub-site or any of the files.  A search was ambiguous:  it would autocomplete in the search bar but then not find anything.

The real clue came from using a direct URL to get to the site which gave a 403 Access Denied error.  Why, I wondered, would there not be a site not found message from the browser?

It took a bit of snooping to find the problem, but here’s what to do.

  1. Go to the Admin Center in Office 365.
  2. Go to the Exchange Admin Center and choose the recipients tab, then choose the groups tab inside that.
  3. Scroll down through the groups and look for any that have been deleted.  Restore them one by one and see which one(s) bring access back.
  4. Done!

Because the sub-site was never deleted, it wasn’t in the recycle bin, and more importantly, nothing was lost.  The only issue was that no one could get to it any longer.  Restoring the groups fixed that.

There are probably two takeaways from this:

  • Limit administrators who might delete a group so that inadvertent deletion doesn’t happen;
  • Perhaps give an explicit full control permission to any sub-sites that have inheritance disabled on them so at least one admin can still get in.

The third takeaway might be to remember this tip.


Flow Is Amazingly Cool and Powerful

I have been working with Flow for a few weeks now, and I am delighted with all it can do.  There are some rough edges and some flaws, and I will address one of the rough edges now, and a flaw later on.

When a File Is Created (or updated)

This is the trigger you should use for launching Flow on a forms library.  I’ll just concentrate on when a file is created for this post.

So I started with Create from blank on the My Flows page (https://flows.microsoft.com to get there).  Click on Search hundreds of connectors and triggers to get to the Flow creation, click on the SharePoint icon, and then on When a file is created (properties only).

In the site address field, use the drop-down arrow to see a list of sites for your Flow logon.  Choose the site where the forms library is contained.  For the SharePoint library name, follow the steps outlined below.

Getting Flow to the Forms Library

What you need to enter into this field is the ID of the SharePoint library.  You can find the library ID by using SharePoint Designer 2013.  Open it to the site, click on Lists and Libraries, then click on the forms library you want to use for Flow.  The list information page displays, and near the top left, below Web Address: is List ID:.  Copy the list ID but do NOT include the {} enclosing it.

Now paste the ID into the library name field in Flow.  Dynamic content will magically appear so your Flow will have access to all of the form library fields/columns.

Thanks to…

I had some great help from Kerem Yuceturk and Stephen Siciliano at Microsoft on getting started with some advanced Flow features and especially this one.

My Unfavorite Flaws in Flow

It probably goes without saying that the interface to access forms libraries needs work.  I haven’t drilled down into dynamic content yet to determine if fields from repeating tables show up, and how one might aggregate data from them into, say, arrays.  My first Flow on a forms library did not require that.

What I hope gets expanded quickly is rich text editing when creating the body of an email.  You can insert HTML tags like <b>, but that is not my favorite approach, and it makes it harder to make Flow accessible to larger groups of Office 365 users.

I also intensely dislike accessing functions in Flow.  Some take so many parameters and options that it becomes nearly impossible to get them right without trial and error.  A richer build experience would be nice.  A lot nicer.

Hope this is of some value to you guys.


While creating a submit data connection for a SharePoint forms library is a wizard process and allows selecting the forms library from a listing that the wizard supplies, that is not the way it works for a SharePoint list.  Instead, here’s what you need to do:

  1. From the Data tab, click on Data Connections and then on Add.
  2. In the wizard that opens, click on Create a new connection and also Submit Data.
  3. On the next page, enter a URL in this format:

    https://servername/site/lists/listname

    For SharePoint Online: https://<orgname>.sharepoint.com/<site>/lists/<listname>

  4. For file name, either leave it as Form or enter a field by clicking on Fx.  If you want to put some fields together (say lastname and firstname) use the concat() function.
  5. If you want to be able to update data in a list item that already exists, check the overwrite box.  Otherwise leave it unchecked.

    NOTE: If you have an InfoPath form that is used to create as well as update items, you might want to create two different submit data connections: overwrite for one, no overwrite for the other.  Use logic to determine which one to use.

  6.  On the next page, you can rename the submit connection or leave the default.
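The URL format in step 3 and the concat() idea in step 4 can be sketched in a few lines of Python (the org, site, and name values are hypothetical):

```python
def submit_url(org: str, site: str, list_name: str) -> str:
    """Build a SharePoint Online submit URL in the format the wizard expects."""
    return f"https://{org}.sharepoint.com/{site}/lists/{list_name}"

def form_name(last: str, first: str) -> str:
    """Mimic InfoPath's concat() for the file name field."""
    return last + first

print(submit_url("contoso", "hr", "Requests"))  # → https://contoso.sharepoint.com/hr/lists/Requests
print(form_name("Smith", "Jane"))               # → SmithJane
```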

That’s all there is to it.  Finding this in a concise manner was just hard and required lots of trial and error.  No thanks necessary!!!


I developed an InfoPath form that, after being submitted by a user, also needed to be subsequently updated for several different approval levels.  That meant that the submit option had to allow for the form instance to be overwritten as it was approved (or rejected).  At the same time, users needed to be able to create a second submission for the same identifying fields (in this case, I concatenated displayname + date + counter, the last being a hidden field).  I had planned on doing a query on the form library to see if a record existed and, if it did, bump the counter automatically.  Alas, I discovered that a query on a forms library just isn’t going to work.

I felt stumped and frustrated.  If the user just submitted a new form, it would overwrite the previous one instead of creating a new instance.  I had a flash of inspiration that solved the problem.

Instead of having one data connection for a submit function (allow overwrite to get approvals to work), I added a second (don’t allow overwrites).  In the form logic, I was already detecting whether it was an initial form submission or a subsequent approval, so I changed the submit for a new form to use the no-overwrite connection.  Now, if there is already a form for the user and the date, submit produces an error.  On the form, I have a checkbox; if the error occurs, the user knows to check it, and that ups the counter that is part of the submit name.
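The two-connection workaround can be modeled in Python (the names and the counter scheme are stand-ins for the form’s fields, not InfoPath itself):

```python
class OverwriteError(Exception):
    """Models the submit error InfoPath raises on the no-overwrite connection."""
    pass

def submit(store: dict, name: str, payload: str, overwrite: bool) -> None:
    """Model of two submit connections: one that overwrites an existing
    instance, and one that errors if the instance already exists."""
    if not overwrite and name in store:
        raise OverwriteError(f"A form named {name!r} already exists")
    store[name] = payload

store = {}
name = "JaneSmith-2017-06-01-0"                       # displayname + date + counter
submit(store, name, "initial form", overwrite=False)  # new submission succeeds
submit(store, name, "approved form", overwrite=True)  # approval update overwrites
try:
    submit(store, name, "second form", overwrite=False)
except OverwriteError:
    # On the real form, the user checks the box and the counter is bumped.
    name = "JaneSmith-2017-06-01-1"
    submit(store, name, "second form", overwrite=False)
```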

I wish it were more sophisticated.  I would have preferred to query the form library and “know” the record exists and to increment the counter without involving the user.  I would prefer that InfoPath notify me if an error occurs and let me handle the exception.  But this is as good a compromise as I have come up with.

Perhaps you will post a comment if you have other ideas on how to solve this problem.


Imagine my surprise to login after a Windows update yesterday and have no network connectivity!  Here’s what I discovered:

  • In adapter options in Network Connections, the only device that showed up was a connection from a VPN client software program that I had used earlier in the year.
  • In Device Manager, my network adapter devices were all there, but looking at the events in Properties, the most recent indicated additional actions were needed.
  • On another computer, I downloaded the most current drivers, then on the errant one uninstalled the adapters, installed the new drivers, re-scanned for new devices, and they re-installed.

The end result was that nothing changed.  I was a bit desperate about trying to restore or rebuild this computer.  Then I tried something that was simple, and it worked.

In Programs and Features in Control Panel, I uninstalled the VPN software client (in reality I had two different ones, but chose the one that appeared in network connections).  I rebooted, and voila!  Everything was back to normal.

Subsequently I did a search and found a number of posts about upgrading from Windows 7 or 8.1 to Windows 10 and having an old Cisco VPN client installed, but nothing more recent.  These posts suggested doing what I did, so I guess I was intuitively lucky.  I hope this helps you from having any angst or requiring similar luck.


I was creating an InfoPath 2013 form that used a people picker to select a user.  I then wanted to get user data like title, first and last name, department, and so on.  I was stumped at how to get the data connection query to work with the people picker user rather than defaulting to the logged on user who was completing the form.  I could not find a single article that properly and easily explained how to do this.

Now you have one.

  1. Create the form with a people picker field.
  2. Add other fields for first and last name, department, title, manager, whatever you want. They can be hidden fields if you need them but don’t want to display them.
  3. Add a field for display name as well.
  4. Create a data connection from SOAP web service (data tab)  as follows:
    1. Enter URL for SharePoint. For Office 365, it is https://<office365name>.sharepoint.com/_vti_bin/UserProfileService.asmx?WSDL.
    2. Choose GetUserProfileByName on the next wizard page.
    3. Click Next twice to accept the next two pages of defaults (make sure the store-a-copy box is unchecked on the second of those pages).
    4. On the next wizard page, accept the default name or enter a new one, uncheck automatically retrieve data box, click Finish.
  5. On the display name field you created, right click on it and click on Text Box Properties.
  6. In the default value field, click on the fx tab to the right, and follow these steps:
    1. Click on Insert Field or Group.
    2. In the list of fields from the SharePoint list, click on Show Advanced view to see a complete list.
    3. Now you will see the fields from Main, and you should see To as a field you can expand by clicking on the + next to it.  Then do the same to expand Person.  Now you should see DisplayName, AccountID, and AccountType fields.
    4. Choose DisplayName and click OK on all of the boxes that are open to save the default value.
  7. Now right click on that display name field on your form (NOT the one you just selected as the default value!!!) and add an action rule whose action is set a field’s value.
  8. For the field, you want to click on the field chooser to the right of the field value, then making sure the advanced view of fields is showing, click on the drop down box at the top of the list and change from Main to the connection name you created in step 4.
  9. Expand queryFields and then GetUserProfileByName to see AccountName.  Select it and click OK.
  10. Now you want to click on the fx button next to the value field.  Then click on Insert Field or Group, change to advanced view, expand the To field and the Person field, then select AccountId and click OK to close all of the boxes.
  11. Now add another rule to Query using the data connection, no choices necessary.
  12. Now you can set other fields to get the user data (title, department, etc.).  Use the same procedure as you would in specifying those fields for the logged on user:
    1. Add action to set field’s value.
    2. Choose the field you want the result in; I use title for this illustration.
    3. For value, click insert field or group, change to advanced view.
    4. Change from Main to data connection list of fields.
    5. Expand all of the datafields until you see value, click on it to highlight.
    6. Now click on the Filter data button at the bottom left.
    7. Value should appear in the left hand field.  Click on the drop down and choose Select a Field or Group.  In the list that appears, click on Name and then OK to go back to filter settings.
    8. Now click on the drop down on the right hand side where the value is blank, select Type Text… from the list.  Now enter Title and click OK, then continue to close all the open wizard boxes.  You have completed the rule to set a field to the value of the title field for the people picker user selected.
  13. Use step 12 instructions for any other user fields you need.  Instead of Title enter FirstName, Department, Manager, etc. for the type of data you want.

Maybe it will help if I repeat this all as a verbal summary:

Set up to create an InfoPath form for a SharePoint list.  On that form, add a people picker box.  Add a text field to the form that can be hidden or displayed, as you wish.  Create a data connection to get user profile data by name.  Set the default value of the text field to be the DisplayName of the people picker field.  Add action rules to the text field that will first set the AccountName of the data connection query to the AccountID of the people picker.  Query the data as the next action, then set field’s value(s) to the AD information you require.
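For the curious, what the GetUserProfileByName data connection sends is a plain SOAP request.  A Python sketch of the request body, based on my reading of the UserProfileService WSDL (the account name shown is hypothetical):

```python
def get_user_profile_envelope(account_name: str) -> str:
    """Build the SOAP envelope for a GetUserProfileByName call.
    The AccountName element is what the action rule sets from the
    people picker's AccountId."""
    return (
        '<?xml version="1.0" encoding="utf-8"?>'
        '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
        "<soap:Body>"
        '<GetUserProfileByName xmlns="http://microsoft.com/webservices/'
        'SharePointPortalServer/UserProfileService">'
        f"<AccountName>{account_name}</AccountName>"
        "</GetUserProfileByName>"
        "</soap:Body></soap:Envelope>"
    )

# Hypothetical claims-style account name for an Office 365 user:
print(get_user_profile_envelope("i:0#.f|membership|jane@contoso.com"))
```

The response contains name/value pairs (Title, Department, FirstName, and so on), which is why the filter in step 12 matches on Name and reads Value.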

Hope this helps, and hope it opens up some new ideas for how to effectively use SharePoint for you.


I have often wanted to vary the backup schedule for Windows Backup, but the user interface only offered once a day or more than once a day.  I wanted it to happen less often than every day, and I finally discovered there was a way.

To get more options, you need to use Task Scheduler, which can be found under Administrative Tools in Control Panel.  When it opens, navigate to Microsoft->Windows->Backup.  You will see the schedule you set up in Windows Backup.  Highlight the schedule, right click, and choose Properties.  Then click on the Triggers tab, and finally click on Edit.

When you select Weekly, you now have the option of choosing only certain days of the week instead of every day.  Select Monthly instead, and you can choose specific days of the month.

There are more options on the other tabs to help you control the way the Backup task runs.  I hope this has greatly extended how you use Windows Backup.


Scenario

I had quite a few Windows 10 client machines running Office 2016 and also using Office 365 online.  Email was fine and synchronized readily, but as soon as I connected either a SharePoint calendar or contact list, Outlook reported send and receive errors, gave an error window that said server authentication protocol not supported, or both.  Outlook might disconnect from Exchange Server or just not sync.


This was a frustrating error, reported quite often online, but none of the proposed solutions actually worked for my bevy of machines.  I owe a debt of gratitude to one of the IT staff members at Bellevue School District who snooped around and came up with a fix that does work.

Updates at Fault

A lot of online postings suggest, correctly, that one or more updates produced this error, and removing them solves the problem.  However, a subsequent replacement update just brings the error back.  And Windows 10 just wants to reinstall those updates for you.

The Fix

Delete the connected contact and/or calendar from Outlook.  Then close Outlook.

Run regedit.  Navigate to the following key and add EnableADAL as a 32-bit DWORD value set to 0:

HKCU\SOFTWARE\Microsoft\Office\16.0\Common\Identity\EnableADAL


Once that is done, open Outlook and see if the same error occurs.  If not, go to Office 365/SharePoint/<calendar> or <contact> and re-connect to Outlook.

If the error persists, you will need to create a new Outlook profile.  If the existing profile has other connections and/or data files, be sure to keep that profile so you can add them properly to the new profile.  Once the new profile has been loaded and is syncing properly, you can go back and remove the original profile.

To make this even easier, create a text file on your desktop or another convenient location but rename it to <something>.reg.  Right click on it and choose Edit, then paste in the following lines.  Save it, then double-click it and it will add the key to your registry.

Windows Registry Editor Version 5.00
[HKEY_CURRENT_USER\SOFTWARE\Microsoft\Office\16.0\Common\Identity]
"Version"=dword:00000001
"EnableADAL"=dword:00000000
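If you manage several machines, a short script can generate the .reg file for you (the output file name is arbitrary).  Note that the quotes must be straight ASCII quotes, not the curly ones a word processor produces:

```python
# The exact lines of the registry fix, as a template string.
REG_TEMPLATE = (
    "Windows Registry Editor Version 5.00\n"
    "[HKEY_CURRENT_USER\\SOFTWARE\\Microsoft\\Office\\16.0\\Common\\Identity]\n"
    '"Version"=dword:00000001\n'
    '"EnableADAL"=dword:00000000\n'
)

def write_reg_file(path: str) -> None:
    """Write the fix as a .reg file, ready to double-click on a Windows box."""
    with open(path, "w", encoding="utf-8") as f:
        f.write(REG_TEMPLATE)

write_reg_file("disable_adal.reg")  # arbitrary file name
```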



Scenario

I am in the process of migrating an SBS 2011 server to Windows 2012 R2.  It is mostly, but not entirely, done, and some essential tasks have been deferred until time permits.  Both of these servers are Hyper-V VM instances.  The host server and both VM servers use iSCSI targets for a number of key disks.  The virtual machines and disks reside on such a volume.

In spite of a dedicated UPS for the host server and the iSCSI device, they both power-cycled for some reason late last week.  It always takes the iSCSI device much longer to reboot than the host server, and I expect a few minutes of delay until the VMs start.  However, when I checked later, the 2012 R2 server had not restarted but was reporting a failure and asking to do a repair.  A few tries at that made no difference.

How I Fixed This

I selected the tools option on the failure start screen and tried starting in safe mode.  No luck, it still failed.  I also tried low video resolution, same problem.  Then to my delight, selecting Directory Services Restore Mode allowed a successful boot.  That made me realize that the NTDS database was probably corrupted.  NOTE: you will have to log on with a local administrator account.  AD does not start, and none of the credentials in it are available.

The first thing I tried was to navigate to the database folder, C:\Windows\NTDS.  I copied the folder contents to C:\Windows\NTDS\Save after creating that folder, then from an elevated command prompt, ran ntdsutil and then the following commands:

  • files <enter>
  • info <enter>  This lists the files for the database and logs.
  • compact to <full path>  You probably want to create a new folder and provide the path to it.
  • quit (twice) to return to the command prompt

Ideally, you will have a new and well formed NTDS.DIT file in that directory, and you should copy it to C:\Windows\NTDS and overwrite the corrupted file.  Don’t worry about losing anything since you have a copy saved.

Now reboot your computer and it should start normally.

WELL MINE DID NOT!

I was so focused on getting my server back that I can only vaguely recall that the compact command did not work, saying there were log files that had not been applied.  Well, I thought that is what compact would do.  Or maybe it did work and the server still did not restart properly.

In any case, I switched to using Esentutl instead of ntdsutil.

Run an elevated command prompt and type:

  • esentutl /g c:\windows\ntds\ntds.dit
  • esentutl /r c:\windows\ntds\ntds.dit

The first is an integrity check, and mine predictably failed.  The second is a recovery command, and that, too, failed with a JET database engine error.  So I ran the repair option, /P, instead of /R on the command line.  Voila!  It completed successfully, and I rebooted to a normal Windows server.

So What Was That All About?

In general, Windows databases do not update directly but instead write transaction log files.  Later, these log files are “played back” to apply the actual transactions to the database itself.  When an unexpected shutdown occurs, as in my case, it is possible that the database does not close properly and has a corrupted element somewhere in it.
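As a rough mental model (not ESE’s actual implementation), that write-ahead-logging idea looks like this in Python:

```python
class WalDatabase:
    """Toy model of write-ahead logging: changes go to a log first,
    then are replayed ('played back') into the database itself."""

    def __init__(self):
        self.db = {}
        self.log = []  # pending transactions not yet applied

    def write(self, key, value):
        self.log.append((key, value))  # durable log record first

    def replay(self):
        for key, value in self.log:    # later, apply the log to the database
            self.db[key] = value
        self.log.clear()

wal = WalDatabase()
wal.write("user1", "record A")
wal.write("user2", "record B")
# A crash between write() and replay() leaves the database behind its log;
# this is the state that recovery (esentutl /r) tries to fix by replaying.
wal.replay()
print(wal.db)  # → {'user1': 'record A', 'user2': 'record B'}
```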

Esentutl is also used for Exchange databases if they become corrupted, and it has saved me many times with SBS errors.  While I was hoping the /R recovery function would work, I was not particularly worried about the /P repair option, and it did work.

You might ask yourself, why didn’t I just restore the directory from the last backup?  Remember those tasks not yet done?  Er, server backup was the next item on the to-do list.  Happy to say it has now been done.